Jan 29 15:28:16 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 15:28:16 crc restorecon[4775]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 29 15:28:16 crc restorecon[4775]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc 
restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc 
restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc 
restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc 
restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 
crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 15:28:16 crc restorecon[4775]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:16 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc 
restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 15:28:17 crc restorecon[4775]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 
crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc 
restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc 
restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc 
restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc 
restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 15:28:17 crc restorecon[4775]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 15:28:18 crc kubenswrapper[4835]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.264716 4835 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272631 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272672 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272677 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272681 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272685 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272691 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272697 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272702 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272707 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272711 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272715 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272719 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272723 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272728 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272732 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272737 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272741 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272745 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272749 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272761 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:28:18 crc 
kubenswrapper[4835]: W0129 15:28:18.272765 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272769 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272772 4835 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272776 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272780 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272784 4835 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272787 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272791 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272795 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272798 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272802 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272806 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272810 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272814 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272817 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272821 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272825 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272829 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272833 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272837 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272841 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272844 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272850 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272855 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272858 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272864 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272867 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272871 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272875 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272880 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272886 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272890 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272894 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272898 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272902 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272906 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272909 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272912 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272916 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272919 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272923 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272926 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272930 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272933 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272937 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272940 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272944 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272948 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272953 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272958 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.272962 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273075 4835 flags.go:64] FLAG: --address="0.0.0.0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273087 4835 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273098 4835 flags.go:64] FLAG: --anonymous-auth="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273105 4835 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273113 4835 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273117 4835 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273124 4835 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273130 4835 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273136 4835 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273141 4835 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273147 4835 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273153 4835 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273157 4835 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273161 4835 flags.go:64] FLAG: --cgroup-root=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273165 4835 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273170 4835 flags.go:64] FLAG: --client-ca-file=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273174 4835 flags.go:64] FLAG: --cloud-config=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273178 4835 flags.go:64] FLAG: --cloud-provider=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273182 4835 flags.go:64] FLAG: --cluster-dns="[]"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273187 4835 flags.go:64] FLAG: --cluster-domain=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273192 4835 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273196 4835 flags.go:64] FLAG: --config-dir=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273201 4835 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273206 4835 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273214 4835 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273218 4835 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273223 4835 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273228 4835 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273233 4835 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273237 4835 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273241 4835 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273246 4835 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273250 4835 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273256 4835 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273260 4835 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273264 4835 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273269 4835 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273273 4835 flags.go:64] FLAG: --enable-server="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273277 4835 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273284 4835 flags.go:64] FLAG: --event-burst="100"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273288 4835 flags.go:64] FLAG: --event-qps="50"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273292 4835 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273297 4835 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273301 4835 flags.go:64] FLAG: --eviction-hard=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273307 4835 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273311 4835 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273316 4835 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273321 4835 flags.go:64] FLAG: --eviction-soft=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273325 4835 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273329 4835 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273333 4835 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273337 4835 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273342 4835 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273346 4835 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273350 4835 flags.go:64] FLAG: --feature-gates=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273356 4835 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273360 4835 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273364 4835 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273369 4835 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273373 4835 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273377 4835 flags.go:64] FLAG: --help="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273382 4835 flags.go:64] FLAG: --hostname-override=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273386 4835 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273391 4835 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273396 4835 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273401 4835 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273405 4835 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273409 4835 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273413 4835 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273417 4835 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273421 4835 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273426 4835 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273430 4835 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273434 4835 flags.go:64] FLAG: --kube-reserved=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273440 4835 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273444 4835 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273449 4835 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273453 4835 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273457 4835 flags.go:64] FLAG: --lock-file=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273461 4835 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273483 4835 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273487 4835 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273496 4835 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273501 4835 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273505 4835 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273509 4835 flags.go:64] FLAG: --logging-format="text"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273513 4835 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273518 4835 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273522 4835 flags.go:64] FLAG: --manifest-url=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273528 4835 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273541 4835 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273546 4835 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273552 4835 flags.go:64] FLAG: --max-pods="110"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273556 4835 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273560 4835 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273564 4835 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273569 4835 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273573 4835 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273578 4835 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273582 4835 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273614 4835 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273618 4835 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273623 4835 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273627 4835 flags.go:64] FLAG: --pod-cidr=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273632 4835 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273640 4835 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273644 4835 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273649 4835 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273654 4835 flags.go:64] FLAG: --port="10250"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273658 4835 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273663 4835 flags.go:64] FLAG: --provider-id=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273667 4835 flags.go:64] FLAG: --qos-reserved=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273671 4835 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273675 4835 flags.go:64] FLAG: --register-node="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273680 4835 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273684 4835 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273692 4835 flags.go:64] FLAG: --registry-burst="10"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273696 4835 flags.go:64] FLAG: --registry-qps="5"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273700 4835 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273706 4835 flags.go:64] FLAG: --reserved-memory=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273711 4835 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273716 4835 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273720 4835 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273725 4835 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273728 4835 flags.go:64] FLAG: --runonce="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273733 4835 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273737 4835 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273741 4835 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273746 4835 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273750 4835 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273754 4835 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273759 4835 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273763 4835 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273767 4835 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273771 4835 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273781 4835 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273785 4835 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273790 4835 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273795 4835 flags.go:64] FLAG: --system-cgroups=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273799 4835 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273806 4835 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273810 4835 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273818 4835 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273823 4835 flags.go:64] FLAG: --tls-min-version=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273827 4835 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273831 4835 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273835 4835 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273839 4835 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273844 4835 flags.go:64] FLAG: --v="2"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273851 4835 flags.go:64] FLAG: --version="false"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273858 4835 flags.go:64] FLAG: --vmodule=""
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273864 4835 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.273868 4835 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.273999 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274004 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274008 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274012 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274016 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274021 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274024 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274028 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274032 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274036 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274040 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274044 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274049 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274052 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274056 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274060 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274064 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274068 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274072 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274075 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274079 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274084 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274088 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274092 4835 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274095 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274098 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274102 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274105 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274109 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274112 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274115 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274119 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274124 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274129 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274132 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274136 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274139 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274142 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274148 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274153 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274157 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274160 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274164 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274168 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274171 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274175 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274178 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274182 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274186 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274189 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274195 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274199 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274203 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274209 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274214 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274217 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274221 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274225 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274229 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274233 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274237 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274240 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274244 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274248 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274251 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274255 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274258 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274262 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274265 4835 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274269 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.274272 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.274281 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.286215 4835 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.286288 4835 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286513 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286536 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286550 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286562 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286574 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286585 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286596 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286606 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286617 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286627 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286638 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286647 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286657 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286668 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286678 4835 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286688 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286698 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286708 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286718 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286729 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286739 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286749 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286763 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286781 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286796 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286807 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286821 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286833 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286844 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286860 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286871 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286881 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286891 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286901 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286912 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286922 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286931 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286941 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286951 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286963 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286973 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:28:18 crc kubenswrapper[4835]: 
W0129 15:28:18.286986 4835 feature_gate.go:330] unrecognized feature gate: Example Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.286997 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287008 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287018 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287028 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287038 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287052 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287064 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287075 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287086 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287096 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287106 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287117 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287128 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287139 4835 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287151 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287162 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287172 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287182 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287193 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287202 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287212 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287222 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287233 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287246 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287257 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287269 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287279 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287290 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:28:18 crc 
kubenswrapper[4835]: W0129 15:28:18.287303 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.287324 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287692 4835 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287719 4835 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287735 4835 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287747 4835 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287758 4835 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287769 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287779 4835 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287789 4835 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287800 4835 feature_gate.go:330] unrecognized 
feature gate: Example Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287811 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287821 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287832 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287841 4835 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287853 4835 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287864 4835 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287875 4835 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287885 4835 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287895 4835 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287905 4835 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287917 4835 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287928 4835 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287939 4835 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287949 4835 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287959 
4835 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287970 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287980 4835 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.287990 4835 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288001 4835 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288015 4835 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288029 4835 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288039 4835 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288050 4835 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288060 4835 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288074 4835 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288087 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288100 4835 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288110 4835 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288121 4835 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288131 4835 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288141 4835 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288152 4835 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288162 4835 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288172 4835 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288182 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288192 4835 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288204 4835 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288214 4835 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288225 4835 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 
15:28:18.288234 4835 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288245 4835 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288255 4835 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288265 4835 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288275 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288286 4835 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288299 4835 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288312 4835 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288324 4835 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288335 4835 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288346 4835 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288356 4835 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288368 4835 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288378 4835 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288388 4835 feature_gate.go:330] unrecognized 
feature gate: EtcdBackendQuota Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288398 4835 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288409 4835 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288421 4835 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288432 4835 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288442 4835 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288452 4835 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288462 4835 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.288505 4835 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.288522 4835 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.289765 4835 server.go:940] "Client rotation is on, will bootstrap in background" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.294669 4835 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.294806 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.296530 4835 server.go:997] "Starting client certificate rotation" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.296559 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.296753 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-13 07:24:06.859499307 +0000 UTC Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.296902 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.330026 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.332258 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.335096 4835 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.348623 4835 log.go:25] "Validated CRI v1 runtime API" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.378726 4835 log.go:25] "Validated CRI v1 image API" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.380858 4835 server.go:1437] "Using cgroup driver setting 
received from the CRI runtime" cgroupDriver="systemd" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.385534 4835 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-15-23-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.385574 4835 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.404438 4835 manager.go:217] Machine: {Timestamp:2026-01-29 15:28:18.401313198 +0000 UTC m=+0.594356882 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4955aaf8-d9bd-4384-b65d-ab2349d3a649 BootID:40fe619e-6036-48d9-81f7-6a2798eec2f7 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 
Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:75:df:8d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:75:df:8d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:79:40:93 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:88:42:e8 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:83:58:0a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:db:53:6e Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:43:d8:e0 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2e:21:99:bd:c4:47 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:16:90:e4:e9:18:b4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] 
CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.404748 4835 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.404949 4835 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.405318 4835 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.405571 4835 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.405620 4835 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.405905 4835 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.405922 4835 container_manager_linux.go:303] "Creating device plugin manager" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.406394 4835 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.406433 4835 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.406698 4835 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.406801 4835 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.410607 4835 kubelet.go:418] "Attempting to sync node with API server" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.410639 4835 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.410698 4835 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.410716 4835 kubelet.go:324] "Adding apiserver pod source" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.410733 4835 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.415251 4835 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.415980 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.416132 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.415978 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.416263 4835 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.416268 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.421193 4835 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422614 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422636 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422644 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422651 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422662 4835 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/nfs" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422670 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422678 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422714 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422725 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422731 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422753 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.422759 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.423706 4835 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.424190 4835 server.go:1280] "Started kubelet" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.425488 4835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.425528 4835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.425844 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:18 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426140 4835 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426511 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426538 4835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426569 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 17:35:10.350529005 +0000 UTC Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.426744 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426795 4835 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426820 4835 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.426919 4835 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.436030 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.436232 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" 
logger="UnhandledError" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.436825 4835 factory.go:55] Registering systemd factory Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.436871 4835 factory.go:221] Registration of the systemd container factory successfully Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.437307 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.437746 4835 factory.go:153] Registering CRI-O factory Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.437799 4835 factory.go:221] Registration of the crio container factory successfully Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.437943 4835 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.438151 4835 factory.go:103] Registering Raw factory Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.438188 4835 manager.go:1196] Started watching for new ooms in manager Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.441899 4835 server.go:460] "Adding debug handlers to kubelet server" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.442320 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3d3a1f67480b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:28:18.424154123 +0000 UTC m=+0.617197777,LastTimestamp:2026-01-29 15:28:18.424154123 +0000 UTC m=+0.617197777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.445198 4835 manager.go:319] Starting recovery of all containers Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446838 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446878 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446892 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446904 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446914 4835 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446926 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446937 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446947 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446958 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446968 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446978 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.446991 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447001 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447036 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447048 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447057 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447069 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" 
seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447080 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447090 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447100 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447110 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447120 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447130 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.447139 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447150 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447164 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447194 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447207 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447218 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447227 4835 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447237 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447246 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447255 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447265 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447274 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447283 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447295 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447305 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447315 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447326 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447337 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447347 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447359 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447390 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447401 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447413 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447426 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447436 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447448 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447459 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447492 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447502 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447521 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447532 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447543 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447555 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447566 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447577 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447587 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447598 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447608 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447618 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447630 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447641 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447651 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447662 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447672 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447683 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447692 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447701 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447711 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447721 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447730 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447739 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447749 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447759 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447769 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447779 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447788 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447799 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447808 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447817 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447827 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447837 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447847 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447857 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447866 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447876 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447885 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447894 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" 
seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447904 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447913 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447922 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447933 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447944 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447957 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.447967 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447976 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447985 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.447995 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448004 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448013 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448022 4835 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448031 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448045 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448056 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448069 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448079 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.448089 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449869 4835 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449891 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449905 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449915 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449926 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449937 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449946 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449956 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449965 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449973 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449982 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.449991 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450001 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450009 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450019 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450030 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450038 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450047 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450057 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450066 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450079 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450089 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450104 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450115 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450123 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450132 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450143 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450151 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450161 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450169 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450178 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450188 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450197 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450206 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450218 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450227 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" 
seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450238 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450247 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450267 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450277 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450289 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450299 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.450310 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450320 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450330 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450339 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450349 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450358 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450367 4835 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450377 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450388 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450399 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450412 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450423 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450433 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450443 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450452 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450461 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450483 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450493 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450503 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450513 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450524 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450534 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450543 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450553 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450566 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" 
seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450577 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450587 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450596 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450607 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450617 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450627 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.450636 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450645 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450654 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450663 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450672 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450681 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450692 4835 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450702 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450712 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450723 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450732 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450742 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450753 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450767 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450777 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450787 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450798 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450807 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450817 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450828 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450838 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450848 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450882 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450893 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450902 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" 
seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450913 4835 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450923 4835 reconstruct.go:97] "Volume reconstruction finished" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.450931 4835 reconciler.go:26] "Reconciler: start to sync state" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.461877 4835 manager.go:324] Recovery completed Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.473157 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.475055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.475121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.475135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.476368 4835 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.476387 4835 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.476415 4835 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.485123 4835 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.489170 4835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.490604 4835 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.490714 4835 kubelet.go:2335] "Starting kubelet main sync loop" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.490725 4835 policy_none.go:49] "None policy: Start" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.491361 4835 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.491591 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.491705 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.493795 4835 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.493854 4835 state_mem.go:35] "Initializing new in-memory state store" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.527545 4835 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.555873 4835 manager.go:334] 
"Starting Device Plugin manager" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.555944 4835 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.555960 4835 server.go:79] "Starting device plugin registration server" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.556536 4835 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.556557 4835 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.556795 4835 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.556872 4835 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.556880 4835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.568617 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.592300 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.592416 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.593567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.593615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.593627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.593858 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.594803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.594837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.594846 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.595068 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.595146 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.595144 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.595302 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.595095 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.596725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc 
kubenswrapper[4835]: I0129 15:28:18.596817 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597054 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597123 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597478 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597591 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597942 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.597979 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598684 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598710 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.598991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.599002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.599264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.599285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.599299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.638740 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.653544 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.653802 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.653947 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654055 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654274 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 
15:28:18.654363 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654444 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654629 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654713 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654739 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654790 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654841 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.654867 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.657412 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.658661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.658725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc 
kubenswrapper[4835]: I0129 15:28:18.658746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.658809 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.659458 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.755862 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756221 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756340 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756435 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756574 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757180 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757362 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757449 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757559 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757648 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") 
" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.756106 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757837 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757909 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757956 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.757971 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758035 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758042 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758103 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758091 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758150 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 
15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758172 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758276 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.758213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.859735 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.861695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.861756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:18 crc kubenswrapper[4835]: 
I0129 15:28:18.861774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.861848 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:18 crc kubenswrapper[4835]: E0129 15:28:18.862426 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.942460 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.968006 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: I0129 15:28:18.979899 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 29 15:28:18 crc kubenswrapper[4835]: W0129 15:28:18.992540 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b747d0393b21cfd06807d001d2587948bcb3adfbbd33f8c9606820577841b7f9 WatchSource:0}: Error finding container b747d0393b21cfd06807d001d2587948bcb3adfbbd33f8c9606820577841b7f9: Status 404 returned error can't find the container with id b747d0393b21cfd06807d001d2587948bcb3adfbbd33f8c9606820577841b7f9 Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.006551 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.010523 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-10406a1fb6b0aeb3ac38e4f32855b98f319f73af6e1e8b953a38bcffd76dd5f5 WatchSource:0}: Error finding container 10406a1fb6b0aeb3ac38e4f32855b98f319f73af6e1e8b953a38bcffd76dd5f5: Status 404 returned error can't find the container with id 10406a1fb6b0aeb3ac38e4f32855b98f319f73af6e1e8b953a38bcffd76dd5f5 Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.010695 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.013029 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d5f8d625a207d5ee2ad2176d23b37ed0d66c1e1c7077cda7186d49151ed051e7 WatchSource:0}: Error finding container d5f8d625a207d5ee2ad2176d23b37ed0d66c1e1c7077cda7186d49151ed051e7: Status 404 returned error can't find the container with id d5f8d625a207d5ee2ad2176d23b37ed0d66c1e1c7077cda7186d49151ed051e7 Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.034718 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-da71c256121cade0919a489b7de12b826f49175cf48cbefaefec23d4c575e34e WatchSource:0}: Error finding container da71c256121cade0919a489b7de12b826f49175cf48cbefaefec23d4c575e34e: Status 404 returned error can't find the container with id da71c256121cade0919a489b7de12b826f49175cf48cbefaefec23d4c575e34e Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.039691 4835 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.051764 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7601a68864693d96c3fbfb76ed052f4eb5ae3f8286091b1269544f00fc7023b2 WatchSource:0}: Error finding container 7601a68864693d96c3fbfb76ed052f4eb5ae3f8286091b1269544f00fc7023b2: Status 404 returned error can't find the container with id 7601a68864693d96c3fbfb76ed052f4eb5ae3f8286091b1269544f00fc7023b2 Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.262782 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.264125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.264173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.264189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.264225 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.264891 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.290522 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f3d3a1f67480b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:28:18.424154123 +0000 UTC m=+0.617197777,LastTimestamp:2026-01-29 15:28:18.424154123 +0000 UTC m=+0.617197777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.426922 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 04:11:39.259298808 +0000 UTC Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.427549 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.474869 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.474994 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" 
logger="UnhandledError" Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.498280 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b747d0393b21cfd06807d001d2587948bcb3adfbbd33f8c9606820577841b7f9"} Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.499419 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7601a68864693d96c3fbfb76ed052f4eb5ae3f8286091b1269544f00fc7023b2"} Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.500930 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"da71c256121cade0919a489b7de12b826f49175cf48cbefaefec23d4c575e34e"} Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.502518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d5f8d625a207d5ee2ad2176d23b37ed0d66c1e1c7077cda7186d49151ed051e7"} Jan 29 15:28:19 crc kubenswrapper[4835]: I0129 15:28:19.503918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"10406a1fb6b0aeb3ac38e4f32855b98f319f73af6e1e8b953a38bcffd76dd5f5"} Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.626369 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 
15:28:19.626460 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.826143 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.826738 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.841072 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Jan 29 15:28:19 crc kubenswrapper[4835]: W0129 15:28:19.909945 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:19 crc kubenswrapper[4835]: E0129 15:28:19.910027 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.065990 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.068770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.068842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.068864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.068912 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:20 crc kubenswrapper[4835]: E0129 15:28:20.069756 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.341384 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:28:20 crc kubenswrapper[4835]: E0129 15:28:20.342809 4835 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.427170 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:28:25.476573374 +0000 UTC Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.427607 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.511042 4835 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521" exitCode=0 Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.511122 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.511221 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.512722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.512785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.512807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.513892 4835 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33" exitCode=0 Jan 29 15:28:20 crc 
kubenswrapper[4835]: I0129 15:28:20.513948 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.513989 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.514779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.514818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.514831 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.517132 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.517216 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.517237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c"} Jan 29 
15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.519428 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b" exitCode=0 Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.519541 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.519609 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.520760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.520793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.520802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.522382 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa" exitCode=0 Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.522415 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa"} Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.522662 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 
15:28:20.523871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.523916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.523934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.526562 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.528034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.528088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:20 crc kubenswrapper[4835]: I0129 15:28:20.528108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.428780 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 03:28:33.643901523 +0000 UTC Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.429422 4835 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:21 crc kubenswrapper[4835]: E0129 15:28:21.442001 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: 
connection refused" interval="3.2s" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.528073 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.528126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.528143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.529390 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e" exitCode=0 Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.529444 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.529602 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.530565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.530596 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.530609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.534131 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.534209 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.534996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.535021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.535032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.542336 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.542369 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.542386 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.542488 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.543376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.543405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.543417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.546329 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2"} Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.546415 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.547221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.547256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.547269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.670936 4835 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.672438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.672500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.672513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.672540 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:21 crc kubenswrapper[4835]: E0129 15:28:21.673096 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.144:6443: connect: connection refused" node="crc" Jan 29 15:28:21 crc kubenswrapper[4835]: I0129 15:28:21.704550 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:28:22 crc kubenswrapper[4835]: W0129 15:28:22.254666 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:22 crc kubenswrapper[4835]: E0129 15:28:22.254798 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.427409 4835 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.428889 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:23:22.551053066 +0000 UTC Jan 29 15:28:22 crc kubenswrapper[4835]: W0129 15:28:22.442886 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.144:6443: connect: connection refused Jan 29 15:28:22 crc kubenswrapper[4835]: E0129 15:28:22.443044 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.144:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.551445 4835 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f" exitCode=0 Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.551504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f"} Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.551674 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.553058 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.553089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.553102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0"} Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554652 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792"} Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554672 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554713 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554735 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.554713 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555857 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.555989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.556039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.556058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.556001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.556134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:22 crc kubenswrapper[4835]: I0129 15:28:22.556151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.430017 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2025-12-16 09:13:06.350751396 +0000 UTC Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.564859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb"} Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.564942 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.564954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2"} Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.564983 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4"} Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.564993 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.565180 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 
15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:23 crc kubenswrapper[4835]: I0129 15:28:23.566719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.430154 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:54:52.429793269 +0000 UTC Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.488537 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.578281 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc"} Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.578344 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14"} Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.578400 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.578423 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.580681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.587049 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.587295 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.588913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.588957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.588969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.873962 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.875811 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.875860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.875877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.875908 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:24 crc kubenswrapper[4835]: I0129 15:28:24.879800 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.430934 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:16:15.293156249 +0000 UTC Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.581939 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.582049 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583458 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:25 crc kubenswrapper[4835]: I0129 15:28:25.583778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:26 crc kubenswrapper[4835]: I0129 15:28:26.431886 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:45:51.005405986 +0000 UTC Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.252751 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.253038 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.254934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.254996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.255022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.432673 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:25:14.25574987 +0000 UTC Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.487089 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 
15:28:27.487291 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.488997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.489082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.489169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.593675 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.593907 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.595597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.595668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.595686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.603766 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.626614 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.626818 4835 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.628858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.628908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:27 crc kubenswrapper[4835]: I0129 15:28:27.628935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.426629 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.433136 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:36:35.632178613 +0000 UTC Jan 29 15:28:28 crc kubenswrapper[4835]: E0129 15:28:28.568833 4835 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.591395 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.591493 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.591564 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.592799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.592837 4835 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.592847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.593619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.593658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:28 crc kubenswrapper[4835]: I0129 15:28:28.593671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.433420 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:08:52.207749337 +0000 UTC Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.594685 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.596360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.596462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.596687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:29 crc kubenswrapper[4835]: I0129 15:28:29.601485 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.433888 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:55:44.192165253 +0000 UTC Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.487341 4835 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.487510 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.598565 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.600395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.600440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:30 crc kubenswrapper[4835]: I0129 15:28:30.600454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:31 crc kubenswrapper[4835]: I0129 15:28:31.434424 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:42:40.864094889 +0000 UTC Jan 29 15:28:32 crc kubenswrapper[4835]: I0129 15:28:32.434895 4835 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 23:52:00.50276593 +0000 UTC Jan 29 15:28:32 crc kubenswrapper[4835]: W0129 15:28:32.728794 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 15:28:32 crc kubenswrapper[4835]: I0129 15:28:32.729676 4835 trace.go:236] Trace[593095291]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:28:22.727) (total time: 10002ms): Jan 29 15:28:32 crc kubenswrapper[4835]: Trace[593095291]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:28:32.728) Jan 29 15:28:32 crc kubenswrapper[4835]: Trace[593095291]: [10.002261586s] [10.002261586s] END Jan 29 15:28:32 crc kubenswrapper[4835]: E0129 15:28:32.729722 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 15:28:32 crc kubenswrapper[4835]: W0129 15:28:32.996480 4835 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 15:28:32 crc kubenswrapper[4835]: I0129 15:28:32.996592 4835 trace.go:236] Trace[1689813302]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:28:22.994) (total time: 10001ms): Jan 29 15:28:32 crc kubenswrapper[4835]: Trace[1689813302]: ---"Objects listed" 
error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (15:28:32.996) Jan 29 15:28:32 crc kubenswrapper[4835]: Trace[1689813302]: [10.001842121s] [10.001842121s] END Jan 29 15:28:32 crc kubenswrapper[4835]: E0129 15:28:32.996620 4835 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.062186 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.062277 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.070447 4835 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.070561 4835 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.435965 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:08:49.981988334 +0000 UTC Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.615251 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.617806 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0" exitCode=255 Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.617864 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0"} Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.618153 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.619544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.619579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.619590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:33 crc kubenswrapper[4835]: I0129 15:28:33.620146 4835 
scope.go:117] "RemoveContainer" containerID="e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.116941 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.436340 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:18:15.879787298 +0000 UTC Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.631109 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.634331 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d"} Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.634454 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.635685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.635763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:34 crc kubenswrapper[4835]: I0129 15:28:34.635786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.436683 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-13 16:27:25.32002315 +0000 UTC Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.637530 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.637654 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.639075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.639113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:35 crc kubenswrapper[4835]: I0129 15:28:35.639126 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:36 crc kubenswrapper[4835]: I0129 15:28:36.437695 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:01:17.831401225 +0000 UTC Jan 29 15:28:36 crc kubenswrapper[4835]: I0129 15:28:36.640788 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:36 crc kubenswrapper[4835]: I0129 15:28:36.641830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:36 crc kubenswrapper[4835]: I0129 15:28:36.641859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:36 crc kubenswrapper[4835]: I0129 15:28:36.641869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.259975 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.437971 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 19:05:51.958335243 +0000 UTC Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.643723 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.644992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.645048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.645060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:37 crc kubenswrapper[4835]: I0129 15:28:37.648001 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.036731 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.043745 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.048148 4835 trace.go:236] Trace[1068179856]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 15:28:25.726) (total time: 12321ms): Jan 29 15:28:38 crc kubenswrapper[4835]: Trace[1068179856]: ---"Objects listed" error: 12321ms (15:28:38.048) Jan 29 15:28:38 crc kubenswrapper[4835]: 
Trace[1068179856]: [12.321954391s] [12.321954391s] END Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.048176 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.049590 4835 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.051008 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.053032 4835 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.070284 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.091107 4835 csr.go:261] certificate signing request csr-mhmhq is approved, waiting to be issued Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.098542 4835 csr.go:257] certificate signing request csr-mhmhq is issued Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.296655 4835 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.296894 4835 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.296957 4835 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted 
less than a second and no items received Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.297013 4835 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.297029 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.144:43168->38.102.83.144:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.188f3d3a44346594 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:28:19.04157634 +0000 UTC m=+1.234620034,LastTimestamp:2026-01-29 15:28:19.04157634 +0000 UTC m=+1.234620034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.420348 4835 apiserver.go:52] "Watching apiserver" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.427933 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.428348 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-s2qgg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.428770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.428806 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.428770 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.428921 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.428968 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.428910 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.429253 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.429515 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.429553 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.429654 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.431306 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.431539 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.431812 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.431862 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.432003 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.432106 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.432108 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.432232 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.432953 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.433336 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.433362 4835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.435698 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.438606 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:33:34.777492314 +0000 UTC Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.451829 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.454528 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.467083 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.473542 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.489799 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.527775 4835 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.528833 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.547092 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.555890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.555968 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.555998 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556024 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556052 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556101 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556123 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556150 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556200 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556222 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556249 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556269 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556316 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556313 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556371 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556557 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556581 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556813 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.556956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557024 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557091 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557102 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557167 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557241 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557256 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557334 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557409 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557405 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557516 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557561 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557601 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557648 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557319 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557666 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557740 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557383 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557626 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557631 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557774 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557645 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557733 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557805 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557832 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557836 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557927 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557948 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557968 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.557977 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558037 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558050 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558063 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558095 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558100 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558182 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558193 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558213 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558222 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558181 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558296 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558307 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558421 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558434 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558482 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558509 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558533 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558552 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558574 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558593 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558612 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558635 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558657 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558682 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558702 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558725 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558750 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558772 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558793 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558813 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558835 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558855 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558879 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558900 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558942 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558965 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558985 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559007 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559031 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559056 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559077 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559098 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559119 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559143 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559165 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559187 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559210 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559284 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559304 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559328 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559350 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 
15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559370 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559393 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559448 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560269 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560298 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560327 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560371 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560394 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560417 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560437 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560504 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560526 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560549 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560577 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 
15:28:38.560598 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560623 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560646 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560692 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560712 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560737 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560790 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560813 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560837 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560863 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560913 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560933 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560958 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560982 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " 
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561003 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561026 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561051 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561074 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561096 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561121 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561162 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561184 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561204 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561233 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: 
I0129 15:28:38.561278 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561301 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561325 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561345 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561365 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561387 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561408 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561453 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561490 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.561560 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561580 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561724 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561746 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561767 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561787 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561806 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561827 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561849 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.561908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561931 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561952 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561972 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561991 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562105 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562125 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562146 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.562165 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562184 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562204 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562225 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562244 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562292 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562312 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562338 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562359 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562382 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562403 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562424 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562447 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562490 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562516 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562538 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562561 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562582 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562603 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562628 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562651 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 
15:28:38.562675 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562712 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562733 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562753 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562773 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562795 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562814 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562835 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562855 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562896 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562917 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.562959 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bbf2aeec-7417-429a-8f23-3f341546d92f-hosts-file\") pod \"node-resolver-s2qgg\" (UID: \"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563043 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6mx\" (UniqueName: \"kubernetes.io/projected/bbf2aeec-7417-429a-8f23-3f341546d92f-kube-api-access-9r6mx\") pod \"node-resolver-s2qgg\" (UID: \"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 
15:28:38.563076 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563097 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563127 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563170 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563193 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563215 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563237 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563281 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563303 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563330 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563373 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.563394 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570432 4835 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570507 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575049 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575073 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558481 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575123 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575145 4835 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558600 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575164 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558686 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575180 4835 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575199 4835 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575217 4835 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575235 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575250 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575265 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575285 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578365 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578384 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578396 4835 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578407 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578453 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578488 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578508 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578521 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578533 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578544 4835 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578557 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578568 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578579 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578590 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578603 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570483 4835 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575185 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558722 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558903 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.558913 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559029 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559173 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559210 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559284 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.559400 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560747 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560828 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560848 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.560930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561091 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561210 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561385 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561355 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.561499 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.565739 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.565785 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.565989 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.566010 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.566148 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.566350 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.566671 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.566920 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567051 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567289 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567515 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567685 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567934 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.567987 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568088 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568124 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568255 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568400 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568605 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.568901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.569125 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.569139 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.569636 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570334 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570674 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.570684 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575179 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575353 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575686 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575881 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.575900 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.576155 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.576207 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.576611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.576979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.577103 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.577394 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.577659 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.577834 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.577861 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578016 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578363 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578538 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578918 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.578996 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.579216 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.579532 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.579641 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.579819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.579955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580243 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580346 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580604 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580759 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580657 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.581100 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.581566 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.581703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.581781 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.581956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582272 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582310 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582326 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582335 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582394 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.582937 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583025 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.586901 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583300 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583664 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583996 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583852 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.584531 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.584892 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.585058 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:39.085035018 +0000 UTC m=+21.278078672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.585076 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.583050 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.585106 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.585191 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.585389 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.580375 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.587051 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.587413 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.587439 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.587509 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.587802 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.588663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.588711 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.590189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.590332 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.590401 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.590627 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.591036 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.591273 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.586600 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.592888 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.593705 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.593925 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.594115 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.594200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.594568 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.599010 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.599973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.600294 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.600314 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.600350 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.600778 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.600794 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.601053 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.601068 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.601644 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.601849 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.602859 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.603307 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.603350 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.603433 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.604070 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.604414 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.604608 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.605044 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.605390 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.605416 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.606051 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.606230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.607796 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.608277 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.608388 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.608568 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.608955 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.609694 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.609821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.610368 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.610874 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.611746 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.613152 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.613545 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.613810 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.613957 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.614121 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.614156 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.614170 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.614572 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.614660 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:39.114571091 +0000 UTC m=+21.307614745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.615020 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:39.114999232 +0000 UTC m=+21.308042886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.615069 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:39.115061903 +0000 UTC m=+21.308105557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.615652 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.615658 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.615959 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.617701 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.621887 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.622273 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.622294 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.622308 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.622371 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:39.122354444 +0000 UTC m=+21.315398098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.629206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.629776 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.631299 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.634082 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.637693 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.637821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.644561 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.646166 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.648794 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.653134 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.658261 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.670642 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gpcs8"] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.671234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.673938 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.674165 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.674016 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.674242 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.674308 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.676451 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.682882 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6mx\" (UniqueName: \"kubernetes.io/projected/bbf2aeec-7417-429a-8f23-3f341546d92f-kube-api-access-9r6mx\") pod \"node-resolver-s2qgg\" (UID: \"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.682926 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bbf2aeec-7417-429a-8f23-3f341546d92f-hosts-file\") pod \"node-resolver-s2qgg\" (UID: \"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.682963 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683044 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bbf2aeec-7417-429a-8f23-3f341546d92f-hosts-file\") pod \"node-resolver-s2qgg\" (UID: \"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683101 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683119 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683130 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683141 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683150 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683159 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683169 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683179 4835 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683188 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683197 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683197 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683207 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683218 4835 
reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683262 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683273 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683283 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683294 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683302 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.683362 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683372 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683382 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683392 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683403 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683412 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683421 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683430 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683439 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683448 4835 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683457 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683481 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683490 4835 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683499 4835 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683517 4835 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: 
I0129 15:28:38.683527 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683538 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683548 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683556 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683566 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683575 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683586 4835 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683609 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683620 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683629 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683641 4835 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683651 4835 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683661 4835 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683678 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683693 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683702 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683710 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683720 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683732 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683741 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683750 4835 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683759 4835 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683768 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683777 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683785 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683803 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683813 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683822 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683832 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683841 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683850 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683859 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683868 4835 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683878 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683887 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683898 4835 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 
15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683907 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683917 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683931 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683940 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683949 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683958 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683967 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683975 4835 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683984 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.683994 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684003 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684012 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684021 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684030 4835 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684039 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684048 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684057 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684065 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684073 4835 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684082 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684091 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684117 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684126 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684135 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684145 4835 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684159 4835 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684167 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684176 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684186 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 
15:28:38.684194 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684202 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684216 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684225 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684233 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684241 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684251 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684258 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684266 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684275 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684284 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684293 4835 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684302 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684314 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684323 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684331 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684339 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684348 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684358 4835 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684366 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684374 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684383 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684391 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684401 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684409 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684417 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684424 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684434 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684443 4835 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684451 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684460 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684494 4835 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684503 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684512 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684521 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684530 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.684539 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684548 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684557 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684565 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684573 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684582 4835 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684592 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684600 4835 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684609 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684619 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684628 4835 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684637 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684645 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684654 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684663 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684673 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684682 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684691 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684699 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684710 4835 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684719 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684728 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684738 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684747 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684757 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684765 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684773 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684782 4835 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684791 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc 
kubenswrapper[4835]: I0129 15:28:38.684801 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684810 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.684819 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.691813 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: E0129 15:28:38.694521 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.704245 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6mx\" (UniqueName: \"kubernetes.io/projected/bbf2aeec-7417-429a-8f23-3f341546d92f-kube-api-access-9r6mx\") pod \"node-resolver-s2qgg\" (UID: 
\"bbf2aeec-7417-429a-8f23-3f341546d92f\") " pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.706776 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.715591 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.726934 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.738881 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.740532 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.742712 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.743028 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.752545 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.752613 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.755320 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c6c6cc085252efc2f9b94c189c715c9c1889fb5f5ddf3f2136e011b55393a035 WatchSource:0}: Error finding container c6c6cc085252efc2f9b94c189c715c9c1889fb5f5ddf3f2136e011b55393a035: Status 404 returned error can't find the container with id c6c6cc085252efc2f9b94c189c715c9c1889fb5f5ddf3f2136e011b55393a035 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.758926 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.765349 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ae27f54fd91b51c6d8d0e19560eb74765b241253998902266762f0480179ad30 WatchSource:0}: Error finding container ae27f54fd91b51c6d8d0e19560eb74765b241253998902266762f0480179ad30: Status 404 returned error can't find the container with id ae27f54fd91b51c6d8d0e19560eb74765b241253998902266762f0480179ad30 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.765803 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.770853 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-s2qgg" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.775290 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: W0129 15:28:38.775490 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2c7a03210c7cd6fd869b8b100c87082ae16115c637d95f4f02b44427893c9c87 WatchSource:0}: Error finding container 2c7a03210c7cd6fd869b8b100c87082ae16115c637d95f4f02b44427893c9c87: Status 404 returned error can't find the container with id 2c7a03210c7cd6fd869b8b100c87082ae16115c637d95f4f02b44427893c9c87 Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.786058 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82353ab9-3930-4794-8182-a1e0948778df-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 
15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.786131 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/82353ab9-3930-4794-8182-a1e0948778df-rootfs\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.786154 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82353ab9-3930-4794-8182-a1e0948778df-proxy-tls\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.786182 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7tsl\" (UniqueName: \"kubernetes.io/projected/82353ab9-3930-4794-8182-a1e0948778df-kube-api-access-s7tsl\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.791256 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.791492 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.799250 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.823665 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.837371 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.849148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.858732 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.870445 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.880784 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.887567 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82353ab9-3930-4794-8182-a1e0948778df-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.887661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/82353ab9-3930-4794-8182-a1e0948778df-rootfs\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.887686 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82353ab9-3930-4794-8182-a1e0948778df-proxy-tls\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.887709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7tsl\" (UniqueName: \"kubernetes.io/projected/82353ab9-3930-4794-8182-a1e0948778df-kube-api-access-s7tsl\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.889216 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82353ab9-3930-4794-8182-a1e0948778df-mcd-auth-proxy-config\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.889289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/82353ab9-3930-4794-8182-a1e0948778df-rootfs\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.890693 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.893059 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.907054 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82353ab9-3930-4794-8182-a1e0948778df-proxy-tls\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.914659 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7tsl\" (UniqueName: \"kubernetes.io/projected/82353ab9-3930-4794-8182-a1e0948778df-kube-api-access-s7tsl\") pod \"machine-config-daemon-gpcs8\" (UID: \"82353ab9-3930-4794-8182-a1e0948778df\") " pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.922087 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.935046 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.945377 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.956898 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.965936 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.977057 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.986702 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:38 crc kubenswrapper[4835]: I0129 15:28:38.987762 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:28:39 crc kubenswrapper[4835]: W0129 15:28:39.007332 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82353ab9_3930_4794_8182_a1e0948778df.slice/crio-f33cac7ac28af24402580bc32e6248825ebb291a54f764addc005df9685b17fd WatchSource:0}: Error finding container f33cac7ac28af24402580bc32e6248825ebb291a54f764addc005df9685b17fd: Status 404 returned error can't find the container with id f33cac7ac28af24402580bc32e6248825ebb291a54f764addc005df9685b17fd Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.042359 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-57czq"] Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.042962 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6sstt"] Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.043209 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.043656 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046079 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046234 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046297 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046234 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046159 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.046109 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.051646 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.072060 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.087592 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.089029 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.089206 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:40.08915552 +0000 UTC m=+22.282199184 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.100229 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 15:23:38 +0000 UTC, rotation deadline is 2026-11-24 08:07:14.386045165 +0000 UTC Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.100383 4835 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7168h38m35.285668496s for next certificate rotation Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.105860 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.124511 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.150446 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.162032 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.179017 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189356 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c
026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-os-release\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 
15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189892 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-cni-binary-copy\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189933 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-system-cni-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189955 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx82j\" (UniqueName: \"kubernetes.io/projected/49ee8e60-d5ca-491f-bb19-83286a79d337-kube-api-access-gx82j\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.189992 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-multus-certs\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-tuning-conf-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: 
\"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190115 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-system-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190141 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-multus\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190189 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-etc-kubernetes\") pod 
\"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190223 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190246 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190267 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190285 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-k8s-cni-cncf-io\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190306 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-netns\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190323 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-kubelet\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190355 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190359 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190378 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rpbn\" (UniqueName: \"kubernetes.io/projected/9537ec35-484e-4eeb-b081-e453750fb39e-kube-api-access-4rpbn\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190378 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190425 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:40.190400073 +0000 UTC m=+22.383443727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190448 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-socket-dir-parent\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190490 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:40.190451235 +0000 UTC m=+22.383494889 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190516 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-conf-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190529 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190546 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190562 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190545 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-os-release\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190624 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:40.190599778 +0000 UTC m=+22.383643432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190652 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190680 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190698 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:39 crc kubenswrapper[4835]: E0129 15:28:39.190734 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:40.190724721 +0000 UTC m=+22.383768575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190655 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-multus-daemon-config\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190797 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-binary-copy\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-cnibin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-bin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " 
pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190866 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-cnibin\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.190883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-hostroot\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.199596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.211722 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.221872 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.233239 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.244105 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.254988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.265443 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.273186 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.291885 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx82j\" (UniqueName: \"kubernetes.io/projected/49ee8e60-d5ca-491f-bb19-83286a79d337-kube-api-access-gx82j\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.291936 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-multus-certs\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.291957 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-system-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.291978 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-tuning-conf-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.291999 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-57czq\" (UID: 
\"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292020 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292021 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-multus-certs\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292037 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-multus\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292062 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-etc-kubernetes\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-k8s-cni-cncf-io\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc 
kubenswrapper[4835]: I0129 15:28:39.292096 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-system-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292142 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-netns\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292119 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-netns\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-multus\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-etc-kubernetes\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292217 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-kubelet\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-run-k8s-cni-cncf-io\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292261 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rpbn\" (UniqueName: \"kubernetes.io/projected/9537ec35-484e-4eeb-b081-e453750fb39e-kube-api-access-4rpbn\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-socket-dir-parent\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292314 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-conf-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292323 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-kubelet\") pod \"multus-6sstt\" (UID: 
\"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292360 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-cni-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292378 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-os-release\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-os-release\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292413 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-socket-dir-parent\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292415 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-multus-conf-dir\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292435 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-multus-daemon-config\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292487 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-binary-copy\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-cnibin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292536 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-bin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292536 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-tuning-conf-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-cnibin\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292607 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-hostroot\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-os-release\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292652 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-cni-binary-copy\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292654 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-hostroot\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-system-cni-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: 
\"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-cnibin\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-cnibin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9537ec35-484e-4eeb-b081-e453750fb39e-host-var-lib-cni-bin\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-os-release\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292808 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/49ee8e60-d5ca-491f-bb19-83286a79d337-system-cni-dir\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 
15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.292974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.293116 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-multus-daemon-config\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.293291 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9537ec35-484e-4eeb-b081-e453750fb39e-cni-binary-copy\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.293326 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/49ee8e60-d5ca-491f-bb19-83286a79d337-cni-binary-copy\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.293950 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.331949 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx82j\" (UniqueName: \"kubernetes.io/projected/49ee8e60-d5ca-491f-bb19-83286a79d337-kube-api-access-gx82j\") pod \"multus-additional-cni-plugins-57czq\" (UID: \"49ee8e60-d5ca-491f-bb19-83286a79d337\") " pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.344273 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rpbn\" (UniqueName: \"kubernetes.io/projected/9537ec35-484e-4eeb-b081-e453750fb39e-kube-api-access-4rpbn\") pod \"multus-6sstt\" (UID: \"9537ec35-484e-4eeb-b081-e453750fb39e\") " pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.372353 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.386170 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6sstt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.388158 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-57czq" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.416917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.418409 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j9fl"] Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.419233 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.439204 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:21:38.298433077 +0000 UTC Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.442885 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.463058 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.483273 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494306 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn\") pod \"ovnkube-node-5j9fl\" 
(UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494533 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhxl\" (UniqueName: \"kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494565 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494643 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494697 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494774 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494824 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494848 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494866 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494921 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.494965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.495008 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.495026 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.495047 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: W0129 15:28:39.497638 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9537ec35_484e_4eeb_b081_e453750fb39e.slice/crio-572574ede6b81c5f305c6781a93e8c432c75f93744a5a48dedeb9b470a604586 WatchSource:0}: Error finding container 572574ede6b81c5f305c6781a93e8c432c75f93744a5a48dedeb9b470a604586: Status 404 returned error can't find the container with id 572574ede6b81c5f305c6781a93e8c432c75f93744a5a48dedeb9b470a604586 Jan 29 15:28:39 crc kubenswrapper[4835]: W0129 15:28:39.499133 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ee8e60_d5ca_491f_bb19_83286a79d337.slice/crio-ccdc9ed1791baa1a3878be4d8541a81b83b309c35d36805be3e95c791003f611 WatchSource:0}: Error finding container ccdc9ed1791baa1a3878be4d8541a81b83b309c35d36805be3e95c791003f611: Status 
404 returned error can't find the container with id ccdc9ed1791baa1a3878be4d8541a81b83b309c35d36805be3e95c791003f611 Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.502715 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.522918 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.543050 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.562854 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.593139 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596585 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596645 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert\") pod \"ovnkube-node-5j9fl\" (UID: 
\"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596705 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596727 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596770 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596790 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" 
Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596834 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596943 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.596873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597011 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597041 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhxl\" (UniqueName: \"kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597089 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597114 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597147 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597186 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597217 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597251 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597309 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597301 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597427 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597483 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597591 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597769 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597823 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket\") pod \"ovnkube-node-5j9fl\" (UID: 
\"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597916 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597947 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597974 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.597961 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: 
I0129 15:28:39.598018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.598103 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.598307 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.598584 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.601338 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.638630 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhxl\" (UniqueName: \"kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl\") pod \"ovnkube-node-5j9fl\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.650227 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.650282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.650294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2c7a03210c7cd6fd869b8b100c87082ae16115c637d95f4f02b44427893c9c87"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.655974 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.656197 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.656220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c6c6cc085252efc2f9b94c189c715c9c1889fb5f5ddf3f2136e011b55393a035"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.659514 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.659543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" 
event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.659556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"f33cac7ac28af24402580bc32e6248825ebb291a54f764addc005df9685b17fd"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.663552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s2qgg" event={"ID":"bbf2aeec-7417-429a-8f23-3f341546d92f","Type":"ContainerStarted","Data":"c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.663638 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s2qgg" event={"ID":"bbf2aeec-7417-429a-8f23-3f341546d92f","Type":"ContainerStarted","Data":"be0df92ee0ed2fe8528e5a64378d593c97d25755b5ac422f0afc47d948e27f64"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.666068 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerStarted","Data":"ccdc9ed1791baa1a3878be4d8541a81b83b309c35d36805be3e95c791003f611"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.668545 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerStarted","Data":"93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.668574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" 
event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerStarted","Data":"572574ede6b81c5f305c6781a93e8c432c75f93744a5a48dedeb9b470a604586"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.673714 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ae27f54fd91b51c6d8d0e19560eb74765b241253998902266762f0480179ad30"} Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.693194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.732036 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.734689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.774121 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.814454 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.857539 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472
204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.893214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.935040 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:39 crc kubenswrapper[4835]: I0129 15:28:39.973674 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.022972 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.052903 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.096363 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.103568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.103761 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:42.103736991 +0000 UTC m=+24.296780665 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.132825 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.173709 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.206259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.206318 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.206351 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.206380 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206441 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206565 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:42.206536583 +0000 UTC m=+24.399580397 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206588 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206613 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206651 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206669 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206619 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206697 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206590 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206745 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:42.206720107 +0000 UTC m=+24.399763781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206766 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:42.206758308 +0000 UTC m=+24.399801982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.206780 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:42.206773399 +0000 UTC m=+24.399817063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.213886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.252388 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.288938 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.341806 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.374820 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.439547 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:07:04.039669373 +0000 UTC Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.492153 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.492511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.492780 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.492887 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.493160 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:40 crc kubenswrapper[4835]: E0129 15:28:40.493189 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.496655 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.497533 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.498744 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.499384 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.500428 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.501033 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.501720 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.502912 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.503741 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.504976 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.505674 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.507161 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.507939 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.508600 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.509827 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.513082 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.514293 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.514841 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.515680 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.516907 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.517596 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.519400 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.519981 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.521402 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.522252 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.523035 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.523998 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.524572 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.525247 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.526792 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.527517 4835 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.527818 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.530297 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.530867 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.531311 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.533315 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.535115 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.536823 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.538768 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.539818 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.540784 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.541407 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.542664 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.543982 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.544535 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.545294 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.546595 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.549492 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.550132 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.550909 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.552008 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.552784 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.553991 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.555126 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.679186 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579" exitCode=0 Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 
15:28:40.679278 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579"} Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.679320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"f80b9071f79d9408916ddeb8d4f18495e394ef0e7c22a82860b4e75c86f7a385"} Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.681807 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127" exitCode=0 Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.681874 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127"} Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.729851 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.770450 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.794914 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.822571 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.843148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.862205 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.886421 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.907996 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.923740 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.935430 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.959682 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.976001 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:40 crc kubenswrapper[4835]: I0129 15:28:40.997095 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.015151 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.028100 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.043021 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.057558 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.095701 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.140086 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.171765 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.219668 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.255778 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.300910 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.333197 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.372432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.414444 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.441585 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-15 09:24:12.69493448 +0000 UTC Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.458275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.494006 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.668105 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fpjxf"] Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.668522 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.671161 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.671627 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.672167 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.672648 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.689308 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.691740 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065" exitCode=0 Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.691834 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.696681 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3"} Jan 29 15:28:41 crc 
kubenswrapper[4835]: I0129 15:28:41.696735 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.696753 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.696766 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.696781 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.696792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf"} Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.702243 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.727059 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.744781 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.784091 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.813756 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.823805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-serviceca\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.823886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-host\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.823974 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdz77\" (UniqueName: \"kubernetes.io/projected/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-kube-api-access-sdz77\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.851369 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.892277 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.925646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-host\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 
15:28:41.925695 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdz77\" (UniqueName: \"kubernetes.io/projected/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-kube-api-access-sdz77\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.925776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-serviceca\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.925844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-host\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.927145 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-serviceca\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.942497 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.967437 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdz77\" (UniqueName: \"kubernetes.io/projected/559426b6-ff0a-4f7a-a9e1-de4c69ef5d34-kube-api-access-sdz77\") pod \"node-ca-fpjxf\" (UID: \"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\") " pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:41 crc kubenswrapper[4835]: I0129 15:28:41.991690 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fpjxf" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.005697 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: W0129 15:28:42.023020 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559426b6_ff0a_4f7a_a9e1_de4c69ef5d34.slice/crio-8ac54dc14fd1c80d0f70b4e532a4318f2033ed5b8edd54ff552557e2466a5ea6 WatchSource:0}: Error finding container 8ac54dc14fd1c80d0f70b4e532a4318f2033ed5b8edd54ff552557e2466a5ea6: Status 404 returned error can't find the container with id 8ac54dc14fd1c80d0f70b4e532a4318f2033ed5b8edd54ff552557e2466a5ea6 Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.040670 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.084222 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.115577 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.127984 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.128266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:46.128221919 +0000 UTC m=+28.321265573 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.150239 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.194141 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.228907 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.228957 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.228984 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.229024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229139 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229199 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229230 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229227 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229246 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229245 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:46.229220366 +0000 UTC m=+28.422264020 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229199 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229439 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:46.229409901 +0000 UTC m=+28.422453705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229447 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229458 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229484 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:46.229452172 +0000 UTC m=+28.422496006 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.229541 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:46.229518303 +0000 UTC m=+28.422562127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.231424 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.274777 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.312282 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.355531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.395271 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.439778 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.442270 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:32:07.167808954 +0000 UTC Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.473787 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.491340 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.491424 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.491357 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.491594 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.491823 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:42 crc kubenswrapper[4835]: E0129 15:28:42.491865 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.516051 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.557877 4835 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.597410 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.633150 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.688521 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.703180 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22"} Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.705704 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fpjxf" event={"ID":"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34","Type":"ContainerStarted","Data":"990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5"} Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.705767 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fpjxf" event={"ID":"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34","Type":"ContainerStarted","Data":"8ac54dc14fd1c80d0f70b4e532a4318f2033ed5b8edd54ff552557e2466a5ea6"} Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.709412 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583" exitCode=0 Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.709501 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583"} Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.732635 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b
5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.765123 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.802966 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.840500 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.883421 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.913882 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.955917 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:42 crc kubenswrapper[4835]: I0129 15:28:42.995345 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:42Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.035871 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.076410 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.116434 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.154891 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.194633 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.232980 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.281093 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.319151 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.354813 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.398694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.443433 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:41:43.77307352 +0000 UTC Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.719739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c"} Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.724286 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f" exitCode=0 Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.724362 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f"} Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.750392 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.767223 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.788769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.808282 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.828804 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.846032 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.863263 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.879730 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.904767 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.921323 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.934346 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.949580 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.964929 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:43 crc kubenswrapper[4835]: I0129 15:28:43.982000 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.001933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:43Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.444298 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 17:03:54.570907464 +0000 UTC Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.450333 4835 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.452511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.452543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.452560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.452695 4835 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.461771 4835 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.462118 4835 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.463396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.463461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.463517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.463549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.463567 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.479056 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.483505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.483567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.483586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.483609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.483624 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.491815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.491979 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.492082 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.492141 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.492186 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.492237 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.503989 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.510090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.510144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.510155 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.510176 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.510217 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.528344 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.533502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.533565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.533585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.533611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.533629 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.549794 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.555639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.555699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.555710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.555729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.555741 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.569608 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: E0129 15:28:44.569759 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.571858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.571911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.571927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.571954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.571975 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.675032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.675092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.675108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.675131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.675145 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.733023 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16" exitCode=0 Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.733076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.756396 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.772899 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.778285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.778449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.778594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.778751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.778853 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.791630 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.809538 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.823839 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.837485 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.850769 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.862170 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.881551 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.883435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.883506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.883521 4835 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.883542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.883556 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.902148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.917442 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.932987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.948158 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.959351 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.982282 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:44Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.986452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.986583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.986681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.986801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:44 crc kubenswrapper[4835]: I0129 15:28:44.986875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:44Z","lastTransitionTime":"2026-01-29T15:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.089564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.089611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.089620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.089636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.089648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.193040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.193111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.193130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.193159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.193181 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.296868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.296929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.296946 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.296970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.296988 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.400885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.400935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.400949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.400968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.400980 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.445125 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:09:32.11495058 +0000 UTC Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.504956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.505031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.505052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.505081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.505099 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.607819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.607873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.607888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.607911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.607927 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.713243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.713290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.713301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.713321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.713341 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.744988 4835 generic.go:334] "Generic (PLEG): container finished" podID="49ee8e60-d5ca-491f-bb19-83286a79d337" containerID="8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374" exitCode=0 Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.745291 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerDied","Data":"8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.769087 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.782905 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.806184 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.816754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.816825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.816848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.816881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.816901 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.827232 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.842741 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.861734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41
d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.878545 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.892270 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.906589 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.922305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.922501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.922523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.922546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.922560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:45Z","lastTransitionTime":"2026-01-29T15:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.923568 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.939188 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.960372 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.975602 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:45 crc kubenswrapper[4835]: I0129 15:28:45.995083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.009314 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.025765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.025803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.025813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc 
kubenswrapper[4835]: I0129 15:28:46.025829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.025841 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.130266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.130336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.130353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.130379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.130398 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.174563 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.174809 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.174769544 +0000 UTC m=+36.367813198 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.233699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.233759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.233776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.233804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: 
I0129 15:28:46.233828 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.276459 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.276542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.276578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.276604 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276700 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276795 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276823 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276838 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276842 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276859 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.276823767 +0000 UTC m=+36.469867461 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.277016 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.276984361 +0000 UTC m=+36.470028225 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.277031 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.277023622 +0000 UTC m=+36.470067506 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.276795 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.277062 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.277077 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.277112 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.277103644 +0000 UTC m=+36.470147538 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.336617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.336698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.336711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.336733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.336745 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.439589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.439630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.439639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.439655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.439666 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.446057 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:00:40.847089566 +0000 UTC Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.491862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.491979 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.492080 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.491905 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.492215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:46 crc kubenswrapper[4835]: E0129 15:28:46.492369 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.545756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.545824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.545842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.545870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.545888 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.649206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.649274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.649290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.649316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.649334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.709176 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.751509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.751564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.751584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.751610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.751626 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.755567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.756076 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.764052 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" event={"ID":"49ee8e60-d5ca-491f-bb19-83286a79d337","Type":"ContainerStarted","Data":"78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.781535 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.792611 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 
15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.796431 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.816754 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.833870 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.847272 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.854994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.855035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.855046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.855063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.855074 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.864439 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.885828 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.913761 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.942025 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.957878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.957915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.957925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:46 crc 
kubenswrapper[4835]: I0129 15:28:46.957940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.957958 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:46Z","lastTransitionTime":"2026-01-29T15:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.961631 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.973536 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:46 crc kubenswrapper[4835]: I0129 15:28:46.992275 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.012913 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.024694 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.038200 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41
d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.053131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.060875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.060906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.060917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.060935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.060947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.069143 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.081529 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.098131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.109302 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.128629 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.143825 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163360 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.163654 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.177748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.193743 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.209159 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.225916 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.243710 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.261377 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.266256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.266291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.266300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.266316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.266327 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.275991 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.370184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.370244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.370258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.370281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.370306 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.446695 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:31:59.008800607 +0000 UTC Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.473553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.473604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.473617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.473635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.473653 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.576645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.576703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.576716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.576739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.576752 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.680141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.680195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.680205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.680221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.680236 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.749678 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.767218 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.773663 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.784692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.784772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.784794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.784825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.784848 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.798838 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.815404 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.830230 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.849381 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.871634 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.887704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.887746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.887756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.887774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.887784 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.888166 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.905748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.924834 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.938561 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.953981 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.974452 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:47Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.991006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.991062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.991076 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.991092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:47 crc kubenswrapper[4835]: I0129 15:28:47.991107 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:47Z","lastTransitionTime":"2026-01-29T15:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.006334 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.025608 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.042994 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.064511 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.076171 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.094596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.094914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.094948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.094961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc 
kubenswrapper[4835]: I0129 15:28:48.094985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.095000 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.198324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.198394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.198411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.198435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.198456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.302448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.302560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.302579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.302612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.302632 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.405900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.405951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.405968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.405993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.406005 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.447810 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:52:32.19696003 +0000 UTC Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.491274 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.491284 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:48 crc kubenswrapper[4835]: E0129 15:28:48.491431 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:48 crc kubenswrapper[4835]: E0129 15:28:48.491636 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.491766 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:48 crc kubenswrapper[4835]: E0129 15:28:48.491837 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.510845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.510892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.510906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.510928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.510940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.534128 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.557587 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.578577 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.596762 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.613277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.613321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 
15:28:48.613332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.613372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.613387 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.614358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.634084 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69
db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.649498 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.662661 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.679251 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.692719 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.707796 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.715955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.715999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.716009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc 
kubenswrapper[4835]: I0129 15:28:48.716029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.716039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.723714 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 
15:28:48.738179 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.754843 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.770432 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.773989 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.818665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.818725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.818737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.818756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.818780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.921237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.921281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.921292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.921332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:48 crc kubenswrapper[4835]: I0129 15:28:48.921343 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:48Z","lastTransitionTime":"2026-01-29T15:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.024061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.024108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.024117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.024140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.024152 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.126830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.126884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.126898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.126916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.126931 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.229684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.229746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.229757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.229782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.229796 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.332131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.332173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.332182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.332201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.332211 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.434763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.434821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.434833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.434856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.434869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.448507 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:54:37.922928538 +0000 UTC Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.544327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.544414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.544440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.544501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.544528 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.647562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.647617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.647628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.647647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.647660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.751652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.751705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.751717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.751739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.751750 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.776946 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/0.log" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.781601 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f" exitCode=1 Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.781657 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.782544 4835 scope.go:117] "RemoveContainer" containerID="2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.809708 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.829134 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.845005 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.855500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.855552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.855566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.855591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.855605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.867962 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.886414 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.909315 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.931241 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.947396 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.953757 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.959912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.959961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 
15:28:49.959973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.960004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.960019 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:49Z","lastTransitionTime":"2026-01-29T15:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.961515 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.981037 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:49 crc kubenswrapper[4835]: I0129 15:28:49.996030 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:49Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.019339 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.051927 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.064043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.064107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.064122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.064152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.064170 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.070399 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315e
be16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.090989 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.105317 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.118696 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.133132 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.150298 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.166121 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.167593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.167670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.167685 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc 
kubenswrapper[4835]: I0129 15:28:50.167714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.167725 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.179988 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 
15:28:50.190676 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.203395 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\"
,\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.214149 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.241395 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.259063 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.271501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.271564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.271581 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.271606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.271626 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.276432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9
e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.297837 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.323524 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.349124 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69
db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:50Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.375338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.375408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.375424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.375445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.375458 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.448826 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 08:11:30.68107841 +0000 UTC Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.479063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.479316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.479334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.479362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.479382 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.491843 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.491954 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:50 crc kubenswrapper[4835]: E0129 15:28:50.492022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.492074 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:50 crc kubenswrapper[4835]: E0129 15:28:50.492192 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:50 crc kubenswrapper[4835]: E0129 15:28:50.492401 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.582397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.582450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.582483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.582505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.582517 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.685562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.685639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.685660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.685691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.685715 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.788274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.788329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.788340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.788360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.788373 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.890589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.890666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.890686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.890720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.890745 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.994016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.994090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.994104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.994123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:50 crc kubenswrapper[4835]: I0129 15:28:50.994139 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:50Z","lastTransitionTime":"2026-01-29T15:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.097825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.097897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.097915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.097947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.097966 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.202256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.202306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.202318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.202340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.202358 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.306358 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.306438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.306448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.306487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.306499 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.411006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.411058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.411071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.411093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.411110 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.449333 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:39:01.780369139 +0000 UTC Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.515110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.515359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.515377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.515406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.515428 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.550147 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv"] Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.551162 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.554628 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.554888 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.579552 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": 
net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.602279 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17
de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.618773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.618830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.618849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.618876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.618896 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.625906 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.648163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.670014 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.692689 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.722709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.722766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.722780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.722800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.722813 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.726777 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.736668 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.736715 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dws6\" (UniqueName: \"kubernetes.io/projected/42364f95-2afe-46fc-8528-19b8dfcac6ac-kube-api-access-8dws6\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.736745 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.736831 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.745890 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.769126 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.792514 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.797284 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/0.log" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.801642 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.801839 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.817017 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.825438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.825514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.825527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.825550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.825567 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.836001 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.838441 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.838522 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dws6\" (UniqueName: \"kubernetes.io/projected/42364f95-2afe-46fc-8528-19b8dfcac6ac-kube-api-access-8dws6\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.838551 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.838572 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.839413 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.839681 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.845730 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42364f95-2afe-46fc-8528-19b8dfcac6ac-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.851768 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.859340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dws6\" (UniqueName: \"kubernetes.io/projected/42364f95-2afe-46fc-8528-19b8dfcac6ac-kube-api-access-8dws6\") pod \"ovnkube-control-plane-749d76644c-pmdjv\" (UID: \"42364f95-2afe-46fc-8528-19b8dfcac6ac\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.867087 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.872303 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.883375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.898610 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.919734 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.928877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.928920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.928930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.928948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.928961 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:51Z","lastTransitionTime":"2026-01-29T15:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.948634 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.976307 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:51 crc kubenswrapper[4835]: I0129 15:28:51.996489 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:51Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.023630 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.031668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.031709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.031720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.031737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.031747 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.037770 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.066510 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 
15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: W0129 15:28:52.068506 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42364f95_2afe_46fc_8528_19b8dfcac6ac.slice/crio-722b68ea13d8cf8a5f7cdc884325571199303c66e6456d0f290b95be6db3b488 WatchSource:0}: Error finding container 722b68ea13d8cf8a5f7cdc884325571199303c66e6456d0f290b95be6db3b488: Status 404 returned error can't find the container with id 722b68ea13d8cf8a5f7cdc884325571199303c66e6456d0f290b95be6db3b488 Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.089244 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.109131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.128870 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.134692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.134748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.134759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.134780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.134801 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.154306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.171642 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.184728 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.200002 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.211612 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.224238 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.237097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.237140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.237151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.237169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.237182 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.339615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.339676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.339690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.339709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.340032 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.443216 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.443273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.443288 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.443521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.443535 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.449413 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:11:12.068537053 +0000 UTC Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.491254 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.491332 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.491455 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.491588 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.491680 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.491778 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.547204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.547237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.547246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.547263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.547272 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.649416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.649480 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.649491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.649512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.649525 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.743514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-p2hj4"] Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.744057 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.744138 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.752030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.752081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.752096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.752118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.752134 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.758290 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.768440 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc 
kubenswrapper[4835]: I0129 15:28:52.782744 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af9
7f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.797956 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.807070 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" event={"ID":"42364f95-2afe-46fc-8528-19b8dfcac6ac","Type":"ContainerStarted","Data":"0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.807131 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" event={"ID":"42364f95-2afe-46fc-8528-19b8dfcac6ac","Type":"ContainerStarted","Data":"f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.807148 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" 
event={"ID":"42364f95-2afe-46fc-8528-19b8dfcac6ac","Type":"ContainerStarted","Data":"722b68ea13d8cf8a5f7cdc884325571199303c66e6456d0f290b95be6db3b488"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.809542 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/1.log" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.810297 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/0.log" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.813431 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.814037 4835 generic.go:334] 
"Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" exitCode=1 Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.814096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.814182 4835 scope.go:117] "RemoveContainer" containerID="2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.814783 4835 scope.go:117] "RemoveContainer" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.814972 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.828492 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.842585 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.848131 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.848243 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twvz4\" (UniqueName: \"kubernetes.io/projected/96bbc399-9838-4fd5-bf1e-66db67707c5c-kube-api-access-twvz4\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.855259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.855334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.855353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.855418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 
crc kubenswrapper[4835]: I0129 15:28:52.855439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.862136 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 
15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.879562 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.894258 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.920001 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848
922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.933493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.948082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.948846 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.948931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twvz4\" (UniqueName: \"kubernetes.io/projected/96bbc399-9838-4fd5-bf1e-66db67707c5c-kube-api-access-twvz4\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.949206 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:52 crc kubenswrapper[4835]: E0129 15:28:52.949358 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:28:53.449317776 +0000 UTC m=+35.642361450 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.958658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.959011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.959047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.959106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.959141 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:52Z","lastTransitionTime":"2026-01-29T15:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.967031 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.977143 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twvz4\" (UniqueName: \"kubernetes.io/projected/96bbc399-9838-4fd5-bf1e-66db67707c5c-kube-api-access-twvz4\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.979502 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:52 crc kubenswrapper[4835]: I0129 15:28:52.993784 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:52Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.006615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.018772 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.045716 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c73aef8392c2813bbbbc93444c70e86f844761b030b168b4d10c53bac64a64f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"message\\\":\\\" 6113 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158016 6113 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:28:49.158353 6113 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 
15:28:49.158456 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0129 15:28:49.158509 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0129 15:28:49.158514 6113 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0129 15:28:49.158523 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0129 15:28:49.158547 6113 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0129 15:28:49.158594 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 15:28:49.158608 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 15:28:49.158622 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 15:28:49.158630 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:28:49.158677 6113 factory.go:656] Stopping watch factory\\\\nI0129 15:28:49.158703 6113 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:28:49.158684 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"h
ost-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.061527 
4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.063411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.063507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.063526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.063553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.063579 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.090004 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.111791 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.135064 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.151762 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.167563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.167630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.167644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.167665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.167679 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.170568 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.192594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.212704 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.232278 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.249224 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.270685 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.271162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.271259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.271274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.271299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.271313 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.290594 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.307765 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.326431 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc 
kubenswrapper[4835]: I0129 15:28:53.353152 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af9
7f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.374741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.374797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.374809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.374827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.374841 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.449542 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:34:19.563204989 +0000 UTC Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.454227 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:53 crc kubenswrapper[4835]: E0129 15:28:53.454360 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:53 crc kubenswrapper[4835]: E0129 15:28:53.454421 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:28:54.454403912 +0000 UTC m=+36.647447566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.477458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.477525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.477534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.477549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.477558 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.580975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.581232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.581265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.581299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.581326 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.684962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.685031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.685055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.685086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.685106 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.788551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.788613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.788630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.788660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.788679 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.821232 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/1.log" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.826359 4835 scope.go:117] "RemoveContainer" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" Jan 29 15:28:53 crc kubenswrapper[4835]: E0129 15:28:53.826650 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.846821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.866054 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.890879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.890943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.890965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.890989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.891005 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.897148 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.929204 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.947671 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.970493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.984010 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.994376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.994438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.994463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.994509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.994526 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:53Z","lastTransitionTime":"2026-01-29T15:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:53 crc kubenswrapper[4835]: I0129 15:28:53.996626 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:53Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.014223 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.029790 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.045541 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.062772 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.080809 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.097779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.097825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.097834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.097853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.097864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.100344 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.116145 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.131730 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.147672 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc 
kubenswrapper[4835]: I0129 15:28:54.200986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.201035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.201044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.201062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.201073 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.262723 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.262920 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:29:10.262890668 +0000 UTC m=+52.455934332 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.304693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.304750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.304761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.304779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.304793 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.364576 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.364788 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364821 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364875 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364896 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364967 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:28:54 crc 
kubenswrapper[4835]: E0129 15:28:54.364985 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364999 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.364985 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:10.364958702 +0000 UTC m=+52.558002366 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.365065 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:10.365048944 +0000 UTC m=+52.558092598 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.365106 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.365125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.365192 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.365214 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:10.365207728 +0000 UTC m=+52.558251382 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.365345 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.365661 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:10.365623998 +0000 UTC m=+52.558667692 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.407514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.407568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.407584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.407608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc 
kubenswrapper[4835]: I0129 15:28:54.407652 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.450592 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:10:35.409797455 +0000 UTC Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.465965 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.466205 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.466324 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:28:56.466295187 +0000 UTC m=+38.659338881 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.491875 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.491957 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.491972 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.491906 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.492235 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.492631 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.492898 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.492977 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.510608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.510669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.510687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.510712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.510729 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.604044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.604112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.604130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.604159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.604176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.626666 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.632571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.632634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.632650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.632673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.632691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.647673 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.653080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.653149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.653169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.653198 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.653217 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.672148 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.677410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.677499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.677511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.677552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.677567 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.699713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.699793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.699806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.699826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.699839 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.715116 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:54Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:54 crc kubenswrapper[4835]: E0129 15:28:54.715255 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.717070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.717118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.717132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.717153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.717165 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.820445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.820558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.820584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.820618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.820644 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.923656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.923708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.923723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.923744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:54 crc kubenswrapper[4835]: I0129 15:28:54.923759 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:54Z","lastTransitionTime":"2026-01-29T15:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.027131 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.027190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.027202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.027220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.027232 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.130708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.130759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.130772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.130789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.130799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.234147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.234225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.234245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.234271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.234289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.337613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.337675 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.337686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.337706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.337720 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.440837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.440911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.440928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.440962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.440987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.451317 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:27:39.212727425 +0000 UTC Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.544413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.544482 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.544496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.544516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.544528 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.647485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.647541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.647554 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.647576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.647589 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.751608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.751664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.751677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.751701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.751717 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.854307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.854355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.854365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.854385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.854397 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.956982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.957035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.957047 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.957066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:55 crc kubenswrapper[4835]: I0129 15:28:55.957078 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:55Z","lastTransitionTime":"2026-01-29T15:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.059999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.060127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.060149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.060171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.060188 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.163082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.163141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.163153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.163174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.163188 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.265780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.265834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.265844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.265862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.265875 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.369052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.369099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.369114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.369163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.369176 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.452235 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:43:35.655705333 +0000 UTC Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.471355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.471416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.471427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.471450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.471497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.490224 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.490423 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.490552 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:29:00.490516888 +0000 UTC m=+42.683560572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.491122 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.491161 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.491141 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.491124 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.491287 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.491395 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.491603 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:28:56 crc kubenswrapper[4835]: E0129 15:28:56.491749 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.574983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.575028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.575040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.575058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.575072 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.678629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.678717 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.678731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.678752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.678778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.782157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.782228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.782244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.782278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.782299 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.885265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.885332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.885349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.885372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.885391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.988206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.988247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.988259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.988278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:56 crc kubenswrapper[4835]: I0129 15:28:56.988294 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:56Z","lastTransitionTime":"2026-01-29T15:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.091240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.091301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.091321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.091346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.091364 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.194279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.194319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.194328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.194344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.194354 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.214587 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.215738 4835 scope.go:117] "RemoveContainer" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" Jan 29 15:28:57 crc kubenswrapper[4835]: E0129 15:28:57.215946 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.297922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.297985 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.298002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.298029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.298049 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.400573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.400665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.400679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.400697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.400709 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.452366 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:01:05.957980223 +0000 UTC Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.503160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.503213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.503221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.503234 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.503244 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.606031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.606077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.606086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.606107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.606117 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.709262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.709319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.709333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.709356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.709370 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.813051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.813139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.813162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.813194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.813218 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.916774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.916843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.916860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.916888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:57 crc kubenswrapper[4835]: I0129 15:28:57.916911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:57Z","lastTransitionTime":"2026-01-29T15:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.020440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.020574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.020596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.020625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.020649 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.123171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.123248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.123260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.123279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.123291 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.226447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.226561 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.226612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.226640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.226656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.330373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.330701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.330758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.330788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.330801 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.433560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.433598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.433607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.433623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.433633 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.453321 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:34:39.448149065 +0000 UTC Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.491928 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.491928 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.491974 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.492076 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:28:58 crc kubenswrapper[4835]: E0129 15:28:58.492224 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:28:58 crc kubenswrapper[4835]: E0129 15:28:58.492376 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:28:58 crc kubenswrapper[4835]: E0129 15:28:58.492511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:28:58 crc kubenswrapper[4835]: E0129 15:28:58.492606 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.512371 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.530144 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.536205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.536254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.536266 4835 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.536287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.536351 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.546317 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 
29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.571496 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.592279 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.622942 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.640443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.640495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.640507 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.640523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.640533 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.641786 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.655563 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.672908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.694987 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.710821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.735269 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.743031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.743074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.743088 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.743109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.743126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.755061 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.772117 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.784745 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.799036 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.813331 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:28:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.844747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.844793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.844805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc 
kubenswrapper[4835]: I0129 15:28:58.844822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.844835 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.947766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.947818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.947829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.947847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:58 crc kubenswrapper[4835]: I0129 15:28:58.947861 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:58Z","lastTransitionTime":"2026-01-29T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.050927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.050995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.051036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.051073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.051096 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.154173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.154245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.154264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.154292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.154312 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.257686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.257746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.257758 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.257786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.257803 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.361577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.361631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.361653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.361673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.361686 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.454265 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:27:00.272838876 +0000 UTC Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.464789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.464860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.464874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.464897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.464911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.568942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.568987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.568999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.569020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.569035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.671666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.671716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.671724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.671742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.671756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.774193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.774239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.774249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.774267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.774278 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.877329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.877393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.877407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.877431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.877446 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.980782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.981160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.981266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.981364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:28:59 crc kubenswrapper[4835]: I0129 15:28:59.981458 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:28:59Z","lastTransitionTime":"2026-01-29T15:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.084958 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.085312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.085402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.085519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.085618 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.189010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.189385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.189530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.189647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.189748 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.292995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.293340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.293429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.293556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.293657 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.396518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.396586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.396604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.396631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.396649 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.454622 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:53:21.829156513 +0000 UTC Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.491302 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.491771 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.491379 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.492257 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.491303 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.492608 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.491383 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.493514 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.499156 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.499491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.499600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.499708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.500344 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.536850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.537070 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:00 crc kubenswrapper[4835]: E0129 15:29:00.537160 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:29:08.537136676 +0000 UTC m=+50.730180340 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.604185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.604233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.604244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.604266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.604282 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.707870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.707910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.707918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.707931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.707940 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.811412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.811493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.811506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.811526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.811539 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.914793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.914874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.914896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.914932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:00 crc kubenswrapper[4835]: I0129 15:29:00.914956 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:00Z","lastTransitionTime":"2026-01-29T15:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.017800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.017874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.017900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.017934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.017963 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.120860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.120933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.120957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.120991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.121016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.224442 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.224516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.224525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.224544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.224555 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.327597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.327663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.327681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.327709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.327729 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.431369 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.431422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.431431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.431452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.431480 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.454869 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:08:42.902296214 +0000 UTC Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.535390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.535511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.535537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.535583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.535607 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.639356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.639414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.639430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.639508 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.639547 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.742707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.742763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.742775 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.742794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.742810 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.845980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.846059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.846078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.846108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.846124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.949742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.949812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.949827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.949853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:01 crc kubenswrapper[4835]: I0129 15:29:01.949869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:01Z","lastTransitionTime":"2026-01-29T15:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.052804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.052865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.052882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.052909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.052928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.156535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.156608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.156621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.156643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.156656 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.260308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.260382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.260407 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.260450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.260522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.364023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.364084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.364096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.364116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.364127 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.456003 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:20:05.187877867 +0000 UTC Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.468315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.468382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.468396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.468421 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.468440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.491641 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.491720 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.491777 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.491952 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:02 crc kubenswrapper[4835]: E0129 15:29:02.491895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:02 crc kubenswrapper[4835]: E0129 15:29:02.492174 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:02 crc kubenswrapper[4835]: E0129 15:29:02.492293 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:02 crc kubenswrapper[4835]: E0129 15:29:02.492458 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.572188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.572244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.572256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.572276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.572291 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.676068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.676140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.676159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.676186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.676205 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.779793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.779865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.779881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.779909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.779928 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.883580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.883632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.883649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.883671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.883690 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.987626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.987681 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.987695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.987714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:02 crc kubenswrapper[4835]: I0129 15:29:02.987725 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:02Z","lastTransitionTime":"2026-01-29T15:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.091302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.091404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.091429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.091462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.091527 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.194384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.194462 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.194516 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.194545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.194563 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.297257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.297317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.297333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.297354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.297368 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.400190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.400280 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.400307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.400341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.400364 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.457224 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:45:20.818621964 +0000 UTC Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.503580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.503642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.503657 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.503676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.503689 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.606632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.606877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.606898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.606922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.606937 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.709351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.709401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.709414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.709431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.709443 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.811987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.812052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.812062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.812078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.812087 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.914688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.914747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.914760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.914778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:03 crc kubenswrapper[4835]: I0129 15:29:03.914791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:03Z","lastTransitionTime":"2026-01-29T15:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.021266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.021347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.021361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.021378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.021390 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.123676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.123711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.123721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.123737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.123747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.227428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.227547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.227558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.227576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.227586 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.331104 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.331165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.331184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.331208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.331230 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.434643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.434702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.434718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.434746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.434764 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.458165 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:34:40.985299311 +0000 UTC Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.491876 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.492040 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.491950 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.491939 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:04 crc kubenswrapper[4835]: E0129 15:29:04.492236 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:04 crc kubenswrapper[4835]: E0129 15:29:04.492394 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:04 crc kubenswrapper[4835]: E0129 15:29:04.492563 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:04 crc kubenswrapper[4835]: E0129 15:29:04.492640 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.542886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.542963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.542975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.542995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.543010 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.645744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.645788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.645796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.645813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.645821 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.749338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.749429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.749450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.749524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.749561 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.853380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.853432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.853444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.853463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.853496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.956517 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.956588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.956627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.956659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:04 crc kubenswrapper[4835]: I0129 15:29:04.956682 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:04Z","lastTransitionTime":"2026-01-29T15:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.016841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.016893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.016904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.016927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.016938 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.031942 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.036081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.036123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.036135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.036152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.036161 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.047432 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…status payload identical to the first attempt above…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.051085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.051124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.051135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.051151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.051164 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.065741 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…status payload identical to the first attempt above…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.072671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.072739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.072757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.072782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.072799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.091911 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.097285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.097333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.097342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.097363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.097374 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.110497 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:05Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:05 crc kubenswrapper[4835]: E0129 15:29:05.110607 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.112296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.112348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.112362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.112388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.112403 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.214596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.214642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.214653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.214671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.214683 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.317745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.317825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.317842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.317865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.317879 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.420710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.420759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.420770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.420788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.420800 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.458460 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:14:45.77987689 +0000 UTC Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.523541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.523584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.523597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.523615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.523625 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.626839 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.626892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.626904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.626922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.626933 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.730631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.730676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.730689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.730704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.730712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.834568 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.834637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.834653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.834679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.834696 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.938722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.939002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.939152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.939255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:05 crc kubenswrapper[4835]: I0129 15:29:05.939334 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:05Z","lastTransitionTime":"2026-01-29T15:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.042589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.042872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.043028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.043172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.043303 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.145799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.145859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.145871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.145945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.145963 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.253099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.253167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.253177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.253216 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.253227 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.356424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.356484 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.356493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.356511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.356522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.459068 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:16:10.351261709 +0000 UTC Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.459923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.460033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.460052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.460086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.460104 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.491815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.491932 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.491815 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.491815 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:06 crc kubenswrapper[4835]: E0129 15:29:06.492046 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:06 crc kubenswrapper[4835]: E0129 15:29:06.492151 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:06 crc kubenswrapper[4835]: E0129 15:29:06.492206 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:06 crc kubenswrapper[4835]: E0129 15:29:06.492279 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.562458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.562762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.562835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.562921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.562987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.664994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.665036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.665049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.665067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.665079 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.767384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.767424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.767434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.767448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.767456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.868819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.869080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.869175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.869270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.869337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.977213 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.977259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.977272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.977294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:06 crc kubenswrapper[4835]: I0129 15:29:06.977308 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:06Z","lastTransitionTime":"2026-01-29T15:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.079792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.080050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.080185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.080296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.080438 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.183445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.183533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.183545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.183564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.183575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.286777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.286825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.286838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.286856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.286868 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.389553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.389593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.389603 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.389620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.389631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.459734 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:28:18.296281576 +0000 UTC Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.494217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.494266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.494277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.494294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.494308 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.596492 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.596531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.596542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.596558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.596570 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.699075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.699123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.699135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.699153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.699164 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.801605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.801658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.801677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.801699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.801712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.904415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.904493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.904511 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.904532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:07 crc kubenswrapper[4835]: I0129 15:29:07.904548 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:07Z","lastTransitionTime":"2026-01-29T15:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.008022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.008070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.008082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.008103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.008121 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.110762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.110820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.110844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.110865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.110880 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.214535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.214594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.214606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.214630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.214658 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.317612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.317656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.317668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.317687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.317702 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.420631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.420680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.420692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.420709 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.420723 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.460679 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:37:50.719933828 +0000 UTC Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.491587 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.491629 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.491672 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.491678 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.491755 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.491869 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.492365 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.492500 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.492620 4835 scope.go:117] "RemoveContainer" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.507526 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a7974
9553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.520154 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc 
kubenswrapper[4835]: I0129 15:29:08.523431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.523485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.523495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.523510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.523519 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.535814 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.542977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.543169 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:08 crc kubenswrapper[4835]: E0129 15:29:08.543257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:29:24.543233456 +0000 UTC m=+66.736277170 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.557193 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.576652 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.591926 4835 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.606954 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.625930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.625997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.626013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.626040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.626056 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.627645 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.658598 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.670927 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.693951 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.707548 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.719072 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.728964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.729010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.729020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.729039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.729052 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.734998 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z 
is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.749031 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.771375 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.785985 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.831882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.831925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.831935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.831953 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.831964 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.880851 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/1.log" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.884577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.885017 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.909100 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.922860 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.935168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.935219 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.935233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.935255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.935269 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:08Z","lastTransitionTime":"2026-01-29T15:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.945453 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.959660 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.976401 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:08 crc kubenswrapper[4835]: I0129 15:29:08.992937 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.015306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.028013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037579 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.037843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.051312 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.066952 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.082365 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.095809 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.108204 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc 
kubenswrapper[4835]: I0129 15:29:09.123059 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.137886 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.139869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.139909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.139923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.139948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.139963 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.165437 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.243039 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.243082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.243093 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.243112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.243125 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.346188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.346242 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.346253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.346273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.346286 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.449402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.449450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.449460 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.449493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.449523 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.461817 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:57:51.090615061 +0000 UTC Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.552112 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.552144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.552153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.552168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.552177 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.654390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.654424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.654433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.654448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.654457 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.758429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.758489 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.758499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.758515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.758528 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.862025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.862586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.862605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.862636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.862654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.890959 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/2.log" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.892017 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/1.log" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.895244 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" exitCode=1 Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.895296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.895354 4835 scope.go:117] "RemoveContainer" containerID="6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.895979 4835 scope.go:117] "RemoveContainer" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" Jan 29 15:29:09 crc kubenswrapper[4835]: E0129 15:29:09.896170 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.919245 4835 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578
c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1b
d1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.938968 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.959530 4835 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.965896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.966154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.966171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.966196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.966214 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:09Z","lastTransitionTime":"2026-01-29T15:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.977531 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:09 crc kubenswrapper[4835]: I0129 15:29:09.996663 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.010291 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.026141 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.041215 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.056950 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.069774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.069853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.069868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.069895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.069910 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.078726 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.104115 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.119226 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.135951 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.148292 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc 
kubenswrapper[4835]: I0129 15:29:10.162835 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.173109 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.173157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.173168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.173188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.173199 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.175082 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.207665 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ab4b6705ec8c3f824361b91fd3ed4559d9cc71fb3c8a93432fcabc6c2243fd1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"cer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 15:28:52.526228 6257 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-etcd/etcd]} name:Service_openshift-etcd/etcd_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.253:2379: 10.217.5.253:9979:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {de17f0de-cfb1-4534-bb42-c40f5e050c73}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 15:28:52.526271 6257 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.276439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.276505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.276515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.276537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.276550 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.361891 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.362219 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:29:42.362172383 +0000 UTC m=+84.555216047 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.380022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.380107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.380136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.380169 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.380194 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.462149 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 07:03:12.156519452 +0000 UTC Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.463585 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.463615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.463634 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.463654 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463754 4835 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463770 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463915 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463939 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463922 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463951 4835 projected.go:194] Error preparing data for projected volume 
kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463967 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.463866 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:42.463838386 +0000 UTC m=+84.656882060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.464026 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:42.46401525 +0000 UTC m=+84.657058904 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.464038 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:42.464033671 +0000 UTC m=+84.657077325 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.464021 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.464256 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:29:42.464197965 +0000 UTC m=+84.657241759 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.483583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.483643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.483655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.483676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.483690 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.491906 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.492009 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.492067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.492313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.492541 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.492541 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.492700 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.492766 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.586556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.586599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.586609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.586622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.586632 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.690022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.690098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.690115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.690140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.690157 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.793668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.793704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.793798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.793836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.793846 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.896438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.896504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.896515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.896530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.896543 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.898335 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/2.log" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.901123 4835 scope.go:117] "RemoveContainer" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" Jan 29 15:29:10 crc kubenswrapper[4835]: E0129 15:29:10.901300 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.921037 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.936821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.952271 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.970278 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.983155 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.994677 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:10Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.999150 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.999177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.999189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 
15:29:10.999204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:10 crc kubenswrapper[4835]: I0129 15:29:10.999216 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:10Z","lastTransitionTime":"2026-01-29T15:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.008548 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.026269 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.038149 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.056390 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.072667 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.088387 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.099574 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc 
kubenswrapper[4835]: I0129 15:29:11.101712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.101903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.102021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.102132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.102257 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.113018 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.122543 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.138319 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.151682 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.205383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.205485 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.205533 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.205563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.205578 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.308881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.308922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.308931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.308947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.308959 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.412448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.412539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.412557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.412580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.412597 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.462406 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:39:59.691025863 +0000 UTC Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.515748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.515808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.515824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.515847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.515864 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.618433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.619046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.619299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.619541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.619766 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.714320 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.723153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.723208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.723225 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.723248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.723267 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.735776 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.739201 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc 
kubenswrapper[4835]: I0129 15:29:11.762811 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af9
7f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.781572 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.803380 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.818534 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.826097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.826137 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.826148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc 
kubenswrapper[4835]: I0129 15:29:11.826165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.826177 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.836971 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.853110 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.881251 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.896992 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b16
6d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.929354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.929428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.929448 4835 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.929503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.929524 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:11Z","lastTransitionTime":"2026-01-29T15:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.931190 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6af
da9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.951306 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.975352 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:11 crc kubenswrapper[4835]: I0129 15:29:11.995727 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:11Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.006593 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.017895 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.028406 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.032319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.032345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.032353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.032367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.032376 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.040211 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:12Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.134842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.134890 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.134913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.134934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.134949 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.237789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.237825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.237834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.237849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.237860 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.340207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.340258 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.340277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.340300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.340317 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.443947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.444021 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.444044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.444073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.444095 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.462717 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:36:01.83683876 +0000 UTC Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.491773 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.491795 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.492115 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.491818 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:12 crc kubenswrapper[4835]: E0129 15:29:12.492172 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:12 crc kubenswrapper[4835]: E0129 15:29:12.492274 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:12 crc kubenswrapper[4835]: E0129 15:29:12.492009 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:12 crc kubenswrapper[4835]: E0129 15:29:12.492397 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.547236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.547302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.547322 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.547346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.547363 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.650072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.650159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.650187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.650220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.650244 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.753634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.753702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.753725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.753756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.753777 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.856693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.856750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.856766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.856791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.856809 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.959275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.959336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.959356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.959380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:12 crc kubenswrapper[4835]: I0129 15:29:12.959400 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:12Z","lastTransitionTime":"2026-01-29T15:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.063815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.063881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.063899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.063925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.063945 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.166532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.166577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.166585 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.166606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.166615 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.268982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.269022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.269030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.269042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.269051 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.371386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.371425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.371435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.371450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.371483 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.463892 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:54:05.602520991 +0000 UTC Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.473857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.473896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.473907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.473921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.473932 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.576524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.576558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.576569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.576583 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.576596 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.679227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.679259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.679267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.679279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.679288 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.782201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.782334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.782356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.782380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.782399 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.885735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.885772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.885783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.885818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.885827 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.988428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.988525 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.988541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.988563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:13 crc kubenswrapper[4835]: I0129 15:29:13.988581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:13Z","lastTransitionTime":"2026-01-29T15:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.091769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.091838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.091856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.091889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.091914 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.195193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.195239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.195252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.195271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.195282 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.298130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.298188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.298201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.298220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.298232 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.401739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.401808 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.401830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.401865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.401889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.464072 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:43:15.429066758 +0000 UTC Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.491585 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:14 crc kubenswrapper[4835]: E0129 15:29:14.491745 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.491752 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.491887 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.491799 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:14 crc kubenswrapper[4835]: E0129 15:29:14.492041 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:14 crc kubenswrapper[4835]: E0129 15:29:14.492204 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:14 crc kubenswrapper[4835]: E0129 15:29:14.492460 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.504611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.504692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.504715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.504743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.504767 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.607233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.607279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.607291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.607307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.607318 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.710722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.710778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.710804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.710833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.710854 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.814184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.814251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.814265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.814287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.814302 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.917128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.917169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.917179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.917195 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:14 crc kubenswrapper[4835]: I0129 15:29:14.917209 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:14Z","lastTransitionTime":"2026-01-29T15:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.020058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.020099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.020108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.020120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.020129 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.122880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.122915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.122924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.122940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.122950 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.124263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.124315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.124325 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.124344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.124358 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.138971 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.143664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.143779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.143797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.143821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.143839 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.159189 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.165108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.165140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.165149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.165163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.165175 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.179719 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.184521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.184599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.184619 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.184648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.184668 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.198992 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.204005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.204045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.204057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.204075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.204086 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.219102 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:15Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:15 crc kubenswrapper[4835]: E0129 15:29:15.219288 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.225571 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.225627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.225637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.225657 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.225667 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.328932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.328975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.328987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.329004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.329015 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.432567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.432631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.432650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.432679 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.432695 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.464778 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:43:00.595641677 +0000 UTC Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.535697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.535737 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.535752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.535770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.535790 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.638732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.638797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.638819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.638849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.638876 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.740816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.740856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.740866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.740886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.740900 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.842696 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.842733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.842742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.842755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.842764 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.944787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.944830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.944842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.944859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:15 crc kubenswrapper[4835]: I0129 15:29:15.944872 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:15Z","lastTransitionTime":"2026-01-29T15:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.047133 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.047180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.047193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.047220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.047234 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.149524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.149630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.149655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.149689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.149712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.255815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.255853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.255863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.255877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.255886 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.358459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.358512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.358521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.358535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.358544 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.461756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.461858 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.461883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.461916 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.461943 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.465762 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:21:21.61506514 +0000 UTC Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.491301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.491358 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.491339 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.491301 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:16 crc kubenswrapper[4835]: E0129 15:29:16.491529 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:16 crc kubenswrapper[4835]: E0129 15:29:16.491700 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:16 crc kubenswrapper[4835]: E0129 15:29:16.491850 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:16 crc kubenswrapper[4835]: E0129 15:29:16.491949 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.566270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.566347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.566364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.566386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.566402 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.669327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.669370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.669381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.669397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.669409 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.772271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.772336 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.772352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.772406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.772424 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.875822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.876037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.876066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.876111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.876244 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.978652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.978712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.978741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.978764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:16 crc kubenswrapper[4835]: I0129 15:29:16.978782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:16Z","lastTransitionTime":"2026-01-29T15:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.081240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.081279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.081291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.081308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.081322 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.184401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.184552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.184581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.184609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.184631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.287365 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.287418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.287430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.287451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.287487 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.391032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.391169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.391194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.391226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.391251 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.466255 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:21:54.51558562 +0000 UTC Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.494362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.494424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.494443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.494493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.494514 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.596863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.596906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.596917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.596935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.596949 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.699526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.699569 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.699581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.699598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.699611 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.803033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.803096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.803105 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.803121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.803131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.906296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.906345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.906362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.906388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:17 crc kubenswrapper[4835]: I0129 15:29:17.906406 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:17Z","lastTransitionTime":"2026-01-29T15:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.009863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.009923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.009941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.009972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.009991 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.113376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.113447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.113499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.113526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.113546 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.217097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.217182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.217204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.217233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.217255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.320458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.320567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.320590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.320620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.320641 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.423961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.424302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.424412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.424544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.424648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.466685 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:23:03.045006795 +0000 UTC Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.491409 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.491494 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.491567 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.491582 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:18 crc kubenswrapper[4835]: E0129 15:29:18.491764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:18 crc kubenswrapper[4835]: E0129 15:29:18.491916 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:18 crc kubenswrapper[4835]: E0129 15:29:18.492126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:18 crc kubenswrapper[4835]: E0129 15:29:18.492248 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.509638 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.527448 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.528191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.528220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.528230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.528246 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.528259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.552616 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.570131 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.587219 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.605727 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.620821 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc 
kubenswrapper[4835]: I0129 15:29:18.630661 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.630711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.630725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.630745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.630757 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.638384 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.656432 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.668815 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.678980 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.690634 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.701748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.725209 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.733543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.733591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.733609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.733633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.733650 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.742759 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.774122 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472
204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.795509 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.816957 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:18Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.836061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.836108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.836120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.836135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.836144 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.938333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.938376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.938387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.938403 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:18 crc kubenswrapper[4835]: I0129 15:29:18.938413 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:18Z","lastTransitionTime":"2026-01-29T15:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.041888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.041949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.041967 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.041992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.042009 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.144647 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.144700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.144712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.144730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.144742 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.247755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.247799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.247810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.247828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.247841 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.350908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.350943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.350954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.350968 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.350978 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.454455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.454575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.454595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.454621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.454643 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.467268 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:50:06.592864022 +0000 UTC Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.558913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.558986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.559005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.559036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.559061 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.661957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.661991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.662000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.662014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.662024 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.764976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.765038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.765055 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.765077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.765094 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.868125 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.868208 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.868233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.868264 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.868286 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.971743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.971803 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.971818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.971837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:19 crc kubenswrapper[4835]: I0129 15:29:19.971850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:19Z","lastTransitionTime":"2026-01-29T15:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.075064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.075152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.075172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.075203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.075223 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.178874 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.179019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.179052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.179090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.179132 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.282735 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.283294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.283447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.283645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.283804 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.387024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.387082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.387097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.387116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.387128 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.468195 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:46:29.704285102 +0000 UTC Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.490690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.490755 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.490780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.490816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.490838 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.491512 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.491664 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.491829 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.491732 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:20 crc kubenswrapper[4835]: E0129 15:29:20.491696 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:20 crc kubenswrapper[4835]: E0129 15:29:20.492185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:20 crc kubenswrapper[4835]: E0129 15:29:20.492439 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:20 crc kubenswrapper[4835]: E0129 15:29:20.492718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.594250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.594291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.594302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.594317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.594328 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.696862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.696907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.696918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.696936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.696948 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.799908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.799957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.799966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.799986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.800001 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.902869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.902911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.902920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.902937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:20 crc kubenswrapper[4835]: I0129 15:29:20.902952 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:20Z","lastTransitionTime":"2026-01-29T15:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.005668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.005726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.005743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.005767 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.005786 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.108062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.108101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.108108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.108121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.108129 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.210794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.210841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.210853 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.210870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.210882 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.312734 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.312772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.312780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.312795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.312804 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.415558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.415599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.415611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.415628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.415640 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.469084 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:59:27.390964639 +0000 UTC Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.518934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.518976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.518987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.519004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.519015 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.621792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.621847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.621860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.621880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.621897 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.724438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.724546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.724563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.724588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.724605 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.827896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.827982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.828000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.828028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.828049 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.930826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.930870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.930881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.930900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:21 crc kubenswrapper[4835]: I0129 15:29:21.930913 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:21Z","lastTransitionTime":"2026-01-29T15:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.033980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.034057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.034069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.034094 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.034109 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.137862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.137902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.137912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.137929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.137942 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.241842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.241973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.241992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.242018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.242037 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.344689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.344739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.344785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.344820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.344840 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.448032 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.448086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.448097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.448116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.448129 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.469433 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:41:40.028453281 +0000 UTC Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.491948 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.491973 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.492044 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:22 crc kubenswrapper[4835]: E0129 15:29:22.492190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.492256 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:22 crc kubenswrapper[4835]: E0129 15:29:22.492412 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:22 crc kubenswrapper[4835]: E0129 15:29:22.492511 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:22 crc kubenswrapper[4835]: E0129 15:29:22.492564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.550790 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.550835 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.550847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.550866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.550877 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.653834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.653872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.653880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.653894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.653904 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.756551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.756590 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.756601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.756615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.756627 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.859577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.859644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.859662 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.859691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.859716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.963310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.963366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.963380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.963401 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:22 crc kubenswrapper[4835]: I0129 15:29:22.963419 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:22Z","lastTransitionTime":"2026-01-29T15:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.067090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.067179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.067199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.067224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.067242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.170908 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.170982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.171014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.171034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.171048 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.275059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.275443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.275457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.275496 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.275508 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.377891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.377937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.377949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.377974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.377991 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.469998 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:12:10.233181339 +0000 UTC Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.481020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.481061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.481077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.481100 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.481116 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.492583 4835 scope.go:117] "RemoveContainer" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" Jan 29 15:29:23 crc kubenswrapper[4835]: E0129 15:29:23.493053 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.503914 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.583837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.583907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.583920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.583942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.583976 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.686896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.686952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.686963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.686982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.686995 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.789973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.790023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.790031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.790050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.790060 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.893106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.893162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.893172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.893192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.893203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.997202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.997285 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.997306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.997337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:23 crc kubenswrapper[4835]: I0129 15:29:23.997358 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:23Z","lastTransitionTime":"2026-01-29T15:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.100716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.100770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.100783 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.100804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.100820 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.203786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.203827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.203843 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.203861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.203873 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.333920 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.333960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.333974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.333993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.334006 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.437860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.437922 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.437940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.437962 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.437984 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.470792 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:26:15.429940153 +0000 UTC Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.491337 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.491337 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.491543 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.491581 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.491335 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.491746 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.491863 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.491946 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.540427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.540528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.540556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.540589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.540618 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.635821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.636075 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:24 crc kubenswrapper[4835]: E0129 15:29:24.636171 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:29:56.636151321 +0000 UTC m=+98.829194975 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.644132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.644167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.644177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.644193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.644203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.747147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.747182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.747194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.747212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.747224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.850290 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.850354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.850368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.850391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.850406 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.953582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.953655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.953677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.953708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:24 crc kubenswrapper[4835]: I0129 15:29:24.953723 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:24Z","lastTransitionTime":"2026-01-29T15:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.056451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.056545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.056563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.056586 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.056603 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.159889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.159970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.159991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.160019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.160037 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.262965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.263016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.263030 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.263052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.263068 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.348651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.348699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.348711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.348730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.348743 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.362375 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.365818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.365869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.365884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.365906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.365919 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.386040 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.394832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.394878 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.394891 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.394909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.394919 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.420737 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.428528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.428575 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.428588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.428612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.428628 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.448773 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.453000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.453049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.453058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.453074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.453085 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.469931 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:25Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:25 crc kubenswrapper[4835]: E0129 15:29:25.470113 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.471315 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:27:50.0188585 +0000 UTC Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.471898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.471945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.471960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.471982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.472003 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.575014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.575078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.575089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.575110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.575124 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.677944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.677989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.678008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.678028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.678039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.779898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.779943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.779954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.779969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.779981 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.882759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.882792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.882809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.882829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.882841 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.986921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.986975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.986994 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.987017 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:25 crc kubenswrapper[4835]: I0129 15:29:25.987033 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:25Z","lastTransitionTime":"2026-01-29T15:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.089909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.089948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.089957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.089972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.089983 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.193270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.193317 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.193326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.193343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.193353 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.296911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.296987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.297009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.297041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.297073 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.400334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.400404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.400417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.400441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.400457 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.472112 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:06:22.757142828 +0000 UTC Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.491774 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.491916 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.491927 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:26 crc kubenswrapper[4835]: E0129 15:29:26.492044 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:26 crc kubenswrapper[4835]: E0129 15:29:26.492204 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.492226 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:26 crc kubenswrapper[4835]: E0129 15:29:26.492358 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:26 crc kubenswrapper[4835]: E0129 15:29:26.492527 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.503625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.503699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.503723 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.503754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.503775 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.606927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.606983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.606998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.607029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.607045 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.709705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.709753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.709765 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.709784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.709796 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.813434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.813502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.813514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.813530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.813538 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.917111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.917699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.917915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.918113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.918716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:26Z","lastTransitionTime":"2026-01-29T15:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.955244 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/0.log" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.955318 4835 generic.go:334] "Generic (PLEG): container finished" podID="9537ec35-484e-4eeb-b081-e453750fb39e" containerID="93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739" exitCode=1 Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.955361 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerDied","Data":"93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739"} Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.955981 4835 scope.go:117] "RemoveContainer" containerID="93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.979513 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b16
6d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:26 crc kubenswrapper[4835]: I0129 15:29:26.994144 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:26Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.017392 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.021361 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.021382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.021390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.021404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.021414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.038330 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315e
be16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.054191 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.076229 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.088358 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.105383 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.121284 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.124649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.124805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.124906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc 
kubenswrapper[4835]: I0129 15:29:27.124992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.125080 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.138708 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.152516 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.160902 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc 
kubenswrapper[4835]: I0129 15:29:27.178500 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af9
7f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.192094 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.206173 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.221214 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.227793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.227910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.227974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc 
kubenswrapper[4835]: I0129 15:29:27.228051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.228110 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.236879 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.249788 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.272419 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.330980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.331038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.331052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.331072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.331369 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.434549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.434646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.434665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.434692 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.434712 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.473280 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:57:04.747204478 +0000 UTC Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.537139 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.537410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.537518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.537628 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.537717 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.641364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.641414 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.641424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.641443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.641456 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.745357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.745406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.745422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.745445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.745491 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.847927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.847975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.847987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.848005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.848019 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.951018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.951059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.951069 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.951083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.951093 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:27Z","lastTransitionTime":"2026-01-29T15:29:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.960109 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/0.log" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.960150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerStarted","Data":"a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759"} Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.976502 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:27 crc kubenswrapper[4835]: I0129 15:29:27.991759 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:27Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.021044 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.034618 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.054079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.054114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.054122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.054136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.054145 4835 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.055254 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd
30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.069596 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.088207 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.106513 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.124182 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86
991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.141112 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.157241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.157289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.157305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.157329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.157344 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.160134 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.175663 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.194014 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.205220 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.219908 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce
4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.237153 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.248336 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.259843 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.260645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.260695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.260704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc 
kubenswrapper[4835]: I0129 15:29:28.260721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.260730 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.271858 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc 
kubenswrapper[4835]: I0129 15:29:28.363782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.363863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.363881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.363907 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.363922 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.466933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.466975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.466987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.467004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.467017 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.474108 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:47:36.9206971 +0000 UTC Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.491756 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.491846 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.492101 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:28 crc kubenswrapper[4835]: E0129 15:29:28.492179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.492238 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:28 crc kubenswrapper[4835]: E0129 15:29:28.492509 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:28 crc kubenswrapper[4835]: E0129 15:29:28.492713 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:28 crc kubenswrapper[4835]: E0129 15:29:28.492829 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.514356 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce
4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.530713 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.550970 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.562559 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.568780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.568828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.568845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc 
kubenswrapper[4835]: I0129 15:29:28.568868 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.568883 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.579217 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc 
kubenswrapper[4835]: I0129 15:29:28.596170 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.610445 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.631412 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.643066 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.665461 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.671416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.671491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.671510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.671532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.671547 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.681127 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315e
be16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.695971 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.709863 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.723031 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86
991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.737215 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.750995 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.762150 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.773986 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.774029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.774042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.774062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.774074 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.778900 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.791863 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:28Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.876558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.876599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.876609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.876625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.876636 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.978458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.978510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.978521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.978540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:28 crc kubenswrapper[4835]: I0129 15:29:28.978551 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:28Z","lastTransitionTime":"2026-01-29T15:29:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.081145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.081209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.081217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.081230 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.081241 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.183672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.183715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.183725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.183743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.183755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.286591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.286633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.286641 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.286655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.286664 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.389319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.389367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.389379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.389398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.389411 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.474687 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 16:52:42.813750192 +0000 UTC Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.491576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.491606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.491616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.491629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.491639 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.594051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.594087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.594098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.594114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.594126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.696404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.696455 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.696483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.696495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.696505 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.798845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.798876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.798885 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.798899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.798909 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.901233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.901279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.901291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.901308 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:29 crc kubenswrapper[4835]: I0129 15:29:29.901320 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:29Z","lastTransitionTime":"2026-01-29T15:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.003408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.003498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.003518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.003539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.003558 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.106345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.106390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.106404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.106422 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.106434 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.209059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.209413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.209423 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.209437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.209447 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.311491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.311529 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.311541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.311559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.311571 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.414098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.414136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.414145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.414159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.414168 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.475753 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:07:43.840346878 +0000 UTC Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.491958 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.492042 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.492070 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.491970 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:30 crc kubenswrapper[4835]: E0129 15:29:30.492211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:30 crc kubenswrapper[4835]: E0129 15:29:30.492292 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:30 crc kubenswrapper[4835]: E0129 15:29:30.492383 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:30 crc kubenswrapper[4835]: E0129 15:29:30.492510 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.516573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.516605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.516613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.516631 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.516641 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.619342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.619390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.619400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.619416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.619429 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.721841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.721917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.721931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.721956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.721970 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.824539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.824591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.824600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.824614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.824623 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.927705 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.927804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.927825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.927859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:30 crc kubenswrapper[4835]: I0129 15:29:30.927896 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:30Z","lastTransitionTime":"2026-01-29T15:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.030591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.030670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.030686 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.030706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.030718 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.134826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.134882 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.134895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.134917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.134931 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.238154 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.238207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.238220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.238245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.238264 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.341914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.341969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.341981 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.342000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.342010 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.444680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.444728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.444740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.444760 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.444773 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.476134 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:04:19.626914343 +0000 UTC
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.547318 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.547370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.547385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.547405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.547418 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.651617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.651682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.651701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.651727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.651746 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.754820 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.754877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.754898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.754928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.754951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.858453 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.858553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.858577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.858607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.858630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.962386 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.962441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.962458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.962524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:31 crc kubenswrapper[4835]: I0129 15:29:31.962574 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:31Z","lastTransitionTime":"2026-01-29T15:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.066294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.066574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.066613 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.066702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.066771 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.170944 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.171046 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.171074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.171113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.171140 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.273876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.273932 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.273945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.273964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.273979 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.377228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.377314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.377333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.377364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.377386 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.476549 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:04:03.211129263 +0000 UTC
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.480785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.480861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.480875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.480896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.480910 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.491321 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.491413 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.491344 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.491339 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:32 crc kubenswrapper[4835]: E0129 15:29:32.491609 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c"
Jan 29 15:29:32 crc kubenswrapper[4835]: E0129 15:29:32.491720 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:32 crc kubenswrapper[4835]: E0129 15:29:32.491804 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:32 crc kubenswrapper[4835]: E0129 15:29:32.491880 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.583930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.584015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.584037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.584073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.584105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.687134 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.687183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.687192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.687211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.687221 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.791904 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.791997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.792015 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.792071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.792090 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.895957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.896020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.896045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.896077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.896099 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.999519 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.999581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.999594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.999617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:32 crc kubenswrapper[4835]: I0129 15:29:32.999629 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:32Z","lastTransitionTime":"2026-01-29T15:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.103224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.103309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.103327 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.103353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.103373 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.206452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.206924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.207014 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.207117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.207212 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.310672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.310720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.310736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.310761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.310778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.415226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.415302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.415316 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.415660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.415693 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.477069 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:46:14.961859972 +0000 UTC
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.522147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.522864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.523043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.523550 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.523667 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.627371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.627418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.627435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.627458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.627508 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.730607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.730653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.730672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.730695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.730711 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.833558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.833609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.833626 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.833652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.833671 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.936678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.936748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.936763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.936782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:33 crc kubenswrapper[4835]: I0129 15:29:33.936813 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:33Z","lastTransitionTime":"2026-01-29T15:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.039984 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.040493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.040596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.040694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.040794 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.144321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.144732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.144801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.144887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.144953 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.248974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.249395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.249483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.249573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.249645 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.353746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.353824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.353862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.353887 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.353901 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.457699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.457770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.457786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.457813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.457834 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.478119 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 04:29:31.473162337 +0000 UTC Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.491799 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.491831 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.491831 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.491861 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:34 crc kubenswrapper[4835]: E0129 15:29:34.492032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:34 crc kubenswrapper[4835]: E0129 15:29:34.492277 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:34 crc kubenswrapper[4835]: E0129 15:29:34.492383 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:34 crc kubenswrapper[4835]: E0129 15:29:34.492597 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.561664 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.561712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.561721 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.561743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.561755 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.672497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.672581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.672599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.672623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.672651 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.776284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.776375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.776396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.776435 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.776459 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.879954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.880012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.880022 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.880040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.880051 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.983071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.983132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.983146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.983167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:34 crc kubenswrapper[4835]: I0129 15:29:34.983194 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:34Z","lastTransitionTime":"2026-01-29T15:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.086009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.086087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.086106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.086141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.086166 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.189130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.189212 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.189231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.189265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.189289 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.293547 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.293640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.293677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.293712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.293734 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.397070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.397128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.397145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.397167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.397185 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.479280 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:04:07.335219128 +0000 UTC Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.500570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.500674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.500695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.500719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.500739 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.611204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.611266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.611283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.611306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.611324 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.636637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.636691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.636711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.636733 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.636752 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.655666 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.660677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.660745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.660763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.660786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.660800 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.674337 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.678869 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.678910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.678919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.678938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.678948 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.694386 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.698650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.698684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.698694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.698712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.698722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.714936 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.719187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.719277 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.719303 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.719340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.719364 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.735413 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:35Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:35 crc kubenswrapper[4835]: E0129 15:29:35.735639 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.737370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.737420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.737429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.737444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.737454 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.840889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.840980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.841004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.841035 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.841058 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.944942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.945019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.945042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.945080 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:35 crc kubenswrapper[4835]: I0129 15:29:35.945101 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:35Z","lastTransitionTime":"2026-01-29T15:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.048349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.048406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.048418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.048439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.048452 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.152020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.152061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.152070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.152090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.152099 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.255897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.256292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.256400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.256544 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.256717 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.359580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.359633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.359644 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.359667 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.359686 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.463135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.463444 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.463532 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.463612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.463696 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.479751 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:18:05.721041601 +0000 UTC Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.491244 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.491279 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:36 crc kubenswrapper[4835]: E0129 15:29:36.491517 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.491284 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:36 crc kubenswrapper[4835]: E0129 15:29:36.491652 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.491638 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:36 crc kubenswrapper[4835]: E0129 15:29:36.491844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:36 crc kubenswrapper[4835]: E0129 15:29:36.492380 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.566788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.566880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.566911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.566949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.566972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.671454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.671581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.671606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.671640 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.671665 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.774355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.774418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.774439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.774497 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.774522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.878279 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.878371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.878400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.878438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.878506 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.982970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.983053 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.983079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.983132 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:36 crc kubenswrapper[4835]: I0129 15:29:36.983160 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:36Z","lastTransitionTime":"2026-01-29T15:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.087620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.087713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.087739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.087776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.087797 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.191847 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.191927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.191950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.191982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.192004 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.295202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.295291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.295312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.295343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.295365 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.398345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.398428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.398445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.398502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.398522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.480734 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:04:08.516482766 +0000 UTC Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.502978 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.503044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.503072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.503107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.503131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.606243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.606310 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.606328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.606355 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.606378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.709260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.709301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.709315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.709335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.709348 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.812743 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.812816 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.812837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.812864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.812884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.916348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.916840 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.916919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.916990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:37 crc kubenswrapper[4835]: I0129 15:29:37.917065 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:37Z","lastTransitionTime":"2026-01-29T15:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.019928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.020426 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.020713 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.020899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.021055 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.124859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.124914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.124934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.124960 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.124977 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.227822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.227893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.227910 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.227938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.227957 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.331382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.331449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.331499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.331533 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.331553 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.435268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.435387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.435405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.435430 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.435447 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.481217 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:31:57.810884942 +0000 UTC Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.491851 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.491859 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.491902 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.492372 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:38 crc kubenswrapper[4835]: E0129 15:29:38.492938 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:38 crc kubenswrapper[4835]: E0129 15:29:38.493070 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:38 crc kubenswrapper[4835]: E0129 15:29:38.492962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.492984 4835 scope.go:117] "RemoveContainer" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" Jan 29 15:29:38 crc kubenswrapper[4835]: E0129 15:29:38.493018 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.512075 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.539099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.539169 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.539188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.539215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.539232 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.541101 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.557976 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.582194 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.595593 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.613632 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86
991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.634099 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.641912 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.641945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.641957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.641977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.641991 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.650809 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.666450 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.683265 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.698013 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.717114 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce
4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.741300 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.745426 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.745665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.745764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.745875 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.745958 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.761627 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.780833 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.801565 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc 
kubenswrapper[4835]: I0129 15:29:38.817968 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.830889 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.848233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.848284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.848312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.848341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.848363 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.851683 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:38Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.951142 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.951180 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.951189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.951204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:38 crc kubenswrapper[4835]: I0129 15:29:38.951215 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:38Z","lastTransitionTime":"2026-01-29T15:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.012972 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/2.log" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.016039 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.016541 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.034395 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.050493 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc 
kubenswrapper[4835]: I0129 15:29:39.053391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.053505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.053524 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.054038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.054102 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.070862 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.087931 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.107409 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.152573 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.157682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 
15:29:39.157716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.157726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.157740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.157749 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.166836 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.191922 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.217853 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.243094 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.259883 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\
":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.261782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.261811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.261824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.261844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.261857 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.289158 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"ce
rt-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.305699 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.320883 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.336851 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.350891 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.365245 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.365291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.365302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.365321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.365332 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.369369 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598b
e6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.390107 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.406767 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:39Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.468624 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.468669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.468678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.468698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.468708 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.482036 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 07:18:53.859494104 +0000 UTC Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.572071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.572128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.572141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.572162 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:39 crc kubenswrapper[4835]: I0129 15:29:39.572177 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:39Z","lastTransitionTime":"2026-01-29T15:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.025970 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/3.log" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.026999 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/2.log" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.031966 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" exitCode=1 Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.032021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.032073 4835 scope.go:117] "RemoveContainer" containerID="c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.033130 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:29:40 crc kubenswrapper[4835]: E0129 15:29:40.033569 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.066109 4835 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd
84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.091601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.091714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.091739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.091777 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.091799 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.093233 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.113254 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.138645 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.159615 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc 
kubenswrapper[4835]: I0129 15:29:40.177043 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.195852 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.195926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.195945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.195975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.195997 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.196419 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.232360 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0656075d6e7d5fe68927d576c74c1cc2943d228c57c15a95ccad38efdded1c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:09Z\\\",\\\"message\\\":\\\"od openshift-multus/network-metrics-daemon-p2hj4\\\\nI0129 15:29:09.404579 6465 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-p2hj4 in node crc\\\\nI0129 15:29:09.404590 6465 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI0129 15:29:09.404605 6465 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.146439ms\\\\nF0129 15:29:09.404218 6465 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:09Z is \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:39Z\\\",\\\"message\\\":\\\"handler 2\\\\nI0129 15:29:39.527741 6857 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.528810 6857 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529113 6857 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529148 6857 reflector.go:311] 
Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529204 6857 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529923 6857 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:29:39.529972 6857 factory.go:656] Stopping watch factory\\\\nI0129 15:29:39.530001 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:29:39.550846 6857 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 15:29:39.550887 6857 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 15:29:39.550971 6857 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:29:39.551014 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 15:29:39.551141 6857 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.252395 4835 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1d
ffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.277207 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.298964 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.300297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.300356 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.300374 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.300394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.300412 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.318334 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.340277 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.359454 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86
991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.384601 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.404669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.405140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.404728 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.405344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.405813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.405995 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.421932 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.442254 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e9
25af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\"
,\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.458581 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:40Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.483108 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:35:49.468449599 +0000 UTC Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.491968 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.492080 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:40 crc kubenswrapper[4835]: E0129 15:29:40.492154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.492185 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:40 crc kubenswrapper[4835]: E0129 15:29:40.492296 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:40 crc kubenswrapper[4835]: E0129 15:29:40.492383 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.492663 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:40 crc kubenswrapper[4835]: E0129 15:29:40.492813 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.509040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.509089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.509101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.509121 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.509134 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.611982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.612031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.612044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.612063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.612077 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.715817 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.715870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.715879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.715897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.715907 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.818639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.818689 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.818699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.818712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.818722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.920773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.920821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.920832 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.920851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:40 crc kubenswrapper[4835]: I0129 15:29:40.920865 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:40Z","lastTransitionTime":"2026-01-29T15:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.024534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.024581 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.024593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.024610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.024622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.038360 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/3.log" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.042588 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:29:41 crc kubenswrapper[4835]: E0129 15:29:41.042789 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.059748 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.078354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.097251 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.110873 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.127347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.127557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.127792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.127888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.127991 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.129532 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.143533 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.164351 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce
4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.184589 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.206407 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.220723 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.230604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.231008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.231024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc 
kubenswrapper[4835]: I0129 15:29:41.231045 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.231060 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.235051 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc 
kubenswrapper[4835]: I0129 15:29:41.250845 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.263933 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.284424 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:39Z\\\",\\\"message\\\":\\\"handler 2\\\\nI0129 15:29:39.527741 6857 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.528810 6857 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529113 6857 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529148 6857 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529204 6857 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529923 6857 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:29:39.529972 6857 factory.go:656] Stopping watch factory\\\\nI0129 15:29:39.530001 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:29:39.550846 6857 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 15:29:39.550887 6857 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 15:29:39.550971 6857 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:29:39.551014 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 15:29:39.551141 6857 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.300077 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.334751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.334836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.334860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.334894 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.334915 4835 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.335781 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd
30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.348346 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.362382 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.378090 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:41Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.437359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.437412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.437424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.437441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.437454 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.486617 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:27:34.778596526 +0000 UTC Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.539353 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.539379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.539387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.539400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.539408 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.643028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.643086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.643107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.643130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.643147 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.746542 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.746605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.746623 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.746652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.746675 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.849862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.849923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.849940 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.849965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.849986 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.952593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.952671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.952688 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.952707 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:41 crc kubenswrapper[4835]: I0129 15:29:41.952721 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:41Z","lastTransitionTime":"2026-01-29T15:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.055260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.055514 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.055579 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.055687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.055751 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.158608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.158930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.159057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.159173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.159302 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.261218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.261270 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.261284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.261299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.261308 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.363341 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.363378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.363388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.363403 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.363414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.394108 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.394400 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:46.394332779 +0000 UTC m=+148.587376483 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.467218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.467273 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.467292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.467315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.467332 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.486737 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:16:31.148070504 +0000 UTC Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.491596 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.491799 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.492120 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.492232 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.492434 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.492711 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.492052 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.493274 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.494933 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.494993 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.495043 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.495101 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495168 4835 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495268 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.495236189 +0000 UTC m=+148.688279883 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495282 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495320 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495346 4835 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495414 4835 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495352 4835 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495564 4835 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495600 4835 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495520 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.495457474 +0000 UTC m=+148.688501168 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495677 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.495653408 +0000 UTC m=+148.688697912 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 15:29:42 crc kubenswrapper[4835]: E0129 15:29:42.495765 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 15:30:46.495751011 +0000 UTC m=+148.688794705 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.570193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.570281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.570299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.570320 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.570338 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.673844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.673917 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.673931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.673956 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.673972 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.777078 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.777133 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.777148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.777172 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.777187 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.881128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.881183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.881194 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.881216 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.881231 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.984880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.984957 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.984971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.985000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:42 crc kubenswrapper[4835]: I0129 15:29:42.985016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:42Z","lastTransitionTime":"2026-01-29T15:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.088627 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.088716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.088741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.088768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.088862 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.192302 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.192357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.192370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.192390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.192405 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.295102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.295165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.295181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.295206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.295224 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.398006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.398052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.398067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.398089 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.398105 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.487132 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:35:14.912609509 +0000 UTC Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.500598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.500656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.500674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.500700 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.500719 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.603615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.603669 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.603680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.603698 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.603710 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.706787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.706836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.706880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.706899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.706911 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.809334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.809381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.809392 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.809410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.809421 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.912083 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.912136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.912153 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.912187 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:43 crc kubenswrapper[4835]: I0129 15:29:43.912205 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:43Z","lastTransitionTime":"2026-01-29T15:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.015219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.015268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.015287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.015315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.015337 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.118244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.118305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.118323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.118348 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.118365 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.221123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.221160 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.221171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.221184 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.221194 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.324025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.324079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.324096 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.324120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.324138 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.426637 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.426706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.426724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.426754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.426780 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.487969 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:38:12.229034811 +0000 UTC Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.491414 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.491547 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.491549 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:44 crc kubenswrapper[4835]: E0129 15:29:44.491714 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.491757 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:44 crc kubenswrapper[4835]: E0129 15:29:44.491923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:44 crc kubenswrapper[4835]: E0129 15:29:44.492051 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:44 crc kubenswrapper[4835]: E0129 15:29:44.492190 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.529856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.529915 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.529938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.529966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.529990 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.633652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.633719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.633736 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.633763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.633782 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.736841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.736902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.736926 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.736954 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.736978 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.839837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.840296 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.840545 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.840818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.841032 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.944385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.944730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.944802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.944867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:44 crc kubenswrapper[4835]: I0129 15:29:44.944942 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:44Z","lastTransitionTime":"2026-01-29T15:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.048630 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.048694 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.048719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.048752 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.048778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.151186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.151237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.151249 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.151267 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.151279 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.254011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.254061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.254097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.254119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.254135 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.357228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.357283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.357307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.357335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.357356 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.460048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.460123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.460145 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.460175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.460217 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.488087 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:37:52.579297735 +0000 UTC Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.562500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.562562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.562580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.562606 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.562625 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.665611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.665695 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.665719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.665747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.665768 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.767945 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.767993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.768006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.768023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.768036 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.871503 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.871541 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.871549 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.871563 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.871572 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.966346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.966398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.966413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.966429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.966440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:45 crc kubenswrapper[4835]: E0129 15:29:45.985981 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:45Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.990670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.990725 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.990763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.990805 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:45 crc kubenswrapper[4835]: I0129 15:29:45.990831 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:45Z","lastTransitionTime":"2026-01-29T15:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.008100 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.013555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.013611 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.013635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.013668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.013691 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.034007 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.042793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.043071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.043151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.043253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.043343 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.062860 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.067366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.067739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.067931 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.068023 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.068107 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.085712 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:46Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.085852 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.087527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.087588 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.087609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.087636 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.087654 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.190992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.191334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.191513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.191699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.191880 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.295182 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.295226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.295236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.295252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.295264 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.398691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.398744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.398756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.398774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.398785 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.488541 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 03:18:05.221536912 +0000 UTC Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.492059 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.492169 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.492246 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.492311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.492348 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.492360 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.492435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:46 crc kubenswrapper[4835]: E0129 15:29:46.492537 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.501251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.501282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.501293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.501306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.501319 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.603897 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.603964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.604019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.604049 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.604070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.707292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.707337 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.707347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.707363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.707375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.811062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.811141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.811167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.811199 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.811221 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.914966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.915041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.915060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.915084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:46 crc kubenswrapper[4835]: I0129 15:29:46.915103 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:46Z","lastTransitionTime":"2026-01-29T15:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.019140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.019239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.019257 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.019281 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.019301 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.122002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.122058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.122074 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.122100 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.122116 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.225350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.225417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.225439 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.225487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.225505 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.328793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.328856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.328872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.328899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.328917 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.431141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.431204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.431223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.431247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.431266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.488943 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:46:45.468954926 +0000 UTC Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.534061 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.534127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.534149 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.534178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.534204 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.637896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.637975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.638000 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.638031 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.638054 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.741909 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.741974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.741995 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.742020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.742040 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.845291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.845363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.845383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.845406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.845423 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.948729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.948804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.948827 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.948859 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:47 crc kubenswrapper[4835]: I0129 15:29:47.948878 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:47Z","lastTransitionTime":"2026-01-29T15:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.053010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.053050 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.053065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.053082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.053095 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.156982 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.157062 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.157085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.157128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.157150 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.260659 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.260710 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.260722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.260741 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.260758 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.363400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.363448 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.363459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.363505 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.363518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.466712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.466761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.466772 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.466800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.466815 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.489290 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:09:40.599437832 +0000 UTC Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.495275 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:48 crc kubenswrapper[4835]: E0129 15:29:48.495712 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.496223 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:48 crc kubenswrapper[4835]: E0129 15:29:48.496501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.502063 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.502154 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:48 crc kubenswrapper[4835]: E0129 15:29:48.502356 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:48 crc kubenswrapper[4835]: E0129 15:29:48.502703 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.518954 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.535549 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.560666 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:39Z\\\",\\\"message\\\":\\\"handler 2\\\\nI0129 15:29:39.527741 6857 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.528810 6857 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529113 6857 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529148 6857 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529204 6857 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529923 6857 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:29:39.529972 6857 factory.go:656] Stopping watch factory\\\\nI0129 15:29:39.530001 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:29:39.550846 6857 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 15:29:39.550887 6857 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 15:29:39.550971 6857 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:29:39.551014 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 15:29:39.551141 6857 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.569936 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.570005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.570018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.570056 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.570070 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.579354 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.611890 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.628800 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.651081 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.668096 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.672911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.672942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.672950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.672963 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.672973 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.684146 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.701877 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.716217 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.730869 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.743755 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.762797 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.776879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.776934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.776950 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.776972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.776988 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.782838 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.799378 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.817020 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.832799 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.846134 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:48Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:48 crc 
kubenswrapper[4835]: I0129 15:29:48.881111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.881240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.881255 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.881276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.881292 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.984877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.984964 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.984976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.984996 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:48 crc kubenswrapper[4835]: I0129 15:29:48.985010 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:48Z","lastTransitionTime":"2026-01-29T15:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.087712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.087761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.087776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.087796 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.087811 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.190072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.190119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.190129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.190143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.190153 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.292989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.293081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.293098 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.293117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.293131 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.395949 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.395993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.396005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.396024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.396039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.489997 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:52:47.609916937 +0000 UTC Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.498426 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.498459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.498506 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.498527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.498541 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.601428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.601756 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.601993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.602116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.602415 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.704708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.705433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.705529 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.705609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.705669 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.808712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.808973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.809066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.809151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.809333 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.911749 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.912013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.912103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.912238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:49 crc kubenswrapper[4835]: I0129 15:29:49.912362 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:49Z","lastTransitionTime":"2026-01-29T15:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.014726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.014789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.014812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.014842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.014866 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.117558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.117596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.117605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.117620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.117631 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.220452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.220744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.220837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.220918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.221001 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.323434 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.323488 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.323500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.323515 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.323525 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.426178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.426635 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.426728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.426854 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.426943 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.490368 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 06:13:18.906970379 +0000 UTC
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.490967 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.490967 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.491045 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.491049 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:50 crc kubenswrapper[4835]: E0129 15:29:50.491154 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:50 crc kubenswrapper[4835]: E0129 15:29:50.491256 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:50 crc kubenswrapper[4835]: E0129 15:29:50.491344 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c"
Jan 29 15:29:50 crc kubenswrapper[4835]: E0129 15:29:50.491414 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.529223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.529253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.529261 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.529274 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.529282 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.631567 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.631610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.631622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.631638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.631650 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.734001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.734043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.734057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.734073 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.734087 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.836081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.836120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.836129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.836144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.836154 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.938990 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.939036 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.939048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.939064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:50 crc kubenswrapper[4835]: I0129 15:29:50.939075 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:50Z","lastTransitionTime":"2026-01-29T15:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.042371 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.042440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.042453 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.042486 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.042500 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.145311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.145728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.145952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.146107 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.146245 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.249147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.249202 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.249214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.249239 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.249255 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.352867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.352933 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.352943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.352959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.352969 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.455980 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.456066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.456085 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.456115 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.456138 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.490837 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:11:47.14577149 +0000 UTC
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.558573 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.558608 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.558617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.558632 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.558641 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.661332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.661385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.661402 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.661424 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.661442 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.764340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.764398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.764417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.764447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.764518 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.866939 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.866970 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.866978 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.867007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.867016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.969041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.969118 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.969135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.969159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:51 crc kubenswrapper[4835]: I0129 15:29:51.969178 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:51Z","lastTransitionTime":"2026-01-29T15:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.072609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.072656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.072668 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.072682 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.072694 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.175178 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.175233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.175243 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.175260 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.175269 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.278634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.278680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.278691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.278708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.278720 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.381712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.381774 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.381785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.381802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.381814 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.484539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.484584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.484594 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.484612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.484624 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.490918 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:16:54.088891279 +0000 UTC
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.491029 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.491094 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.491065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.491116 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 15:29:52 crc kubenswrapper[4835]: E0129 15:29:52.491261 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 15:29:52 crc kubenswrapper[4835]: E0129 15:29:52.491346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c"
Jan 29 15:29:52 crc kubenswrapper[4835]: E0129 15:29:52.491536 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 15:29:52 crc kubenswrapper[4835]: E0129 15:29:52.491650 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.588871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.588913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.588923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.588938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.588949 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.692190 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.692236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.692247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.692265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.692279 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.794161 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.794197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.794205 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.794218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.794227 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.896060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.896099 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.896111 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.896126 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.896139 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.998557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.998621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.998633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.998648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:52 crc kubenswrapper[4835]: I0129 15:29:52.998658 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:52Z","lastTransitionTime":"2026-01-29T15:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.100605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.100638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.100649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.100663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.100674 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.202780 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.202812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.202822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.202834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.202843 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.305335 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.305584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.305658 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.305724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.305778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.407973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.408011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.408020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.408033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.408042 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.491979 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:57:48.913776249 +0000 UTC Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.510593 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.510629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.510639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.510655 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.510685 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.613409 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.613441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.613449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.613479 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.613490 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.716060 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.716101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.716110 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.716141 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.716157 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.819342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.819432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.819452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.819527 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.819553 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.922906 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.923534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.923983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.924377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:53 crc kubenswrapper[4835]: I0129 15:29:53.924786 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:53Z","lastTransitionTime":"2026-01-29T15:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.027159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.027415 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.027576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.027677 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.027791 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.129930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.129983 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.129993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.130011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.130023 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.233306 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.233785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.234043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.234241 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.234449 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.338058 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.338376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.338446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.338543 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.338602 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.441009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.441042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.441051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.441065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.441074 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.491358 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.491409 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.492249 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:17:41.950501504 +0000 UTC Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.491501 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:54 crc kubenswrapper[4835]: E0129 15:29:54.492313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:54 crc kubenswrapper[4835]: E0129 15:29:54.492359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:54 crc kubenswrapper[4835]: E0129 15:29:54.492037 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.491443 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:54 crc kubenswrapper[4835]: E0129 15:29:54.492504 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.544900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.544976 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.544992 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.545013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.545029 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.647438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.647792 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.647888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.647943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.647968 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.751313 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.751370 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.751379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.751393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.751402 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.854742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.854798 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.854811 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.854833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.854849 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.957186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.957229 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.957263 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.957283 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:54 crc kubenswrapper[4835]: I0129 15:29:54.957295 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:54Z","lastTransitionTime":"2026-01-29T15:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.059918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.059975 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.059987 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.060004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.060016 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.162786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.162845 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.162861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.162884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.162900 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.265481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.265529 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.265540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.265558 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.265569 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.367866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.367900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.367913 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.367928 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.367938 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.470499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.470553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.470566 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.470584 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.470597 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.492445 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:29:55 crc kubenswrapper[4835]: E0129 15:29:55.492615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.492654 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:01:38.451203994 +0000 UTC Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.572615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.572652 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.572660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.572676 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.572688 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.676301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.676351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.676363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.676382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.676394 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.779129 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.779168 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.779179 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.779192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.779201 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.882487 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.882528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.882539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.882553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.882562 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.985315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.985351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.985363 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.985379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:55 crc kubenswrapper[4835]: I0129 15:29:55.985391 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:55Z","lastTransitionTime":"2026-01-29T15:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.088394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.088459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.088498 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.088512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.088521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.191097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.191128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.191135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.191148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.191156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.294787 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.294864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.294889 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.294919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.294942 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.398597 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.398844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.398924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.398999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.399083 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.450728 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.450793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.450809 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.450830 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.450850 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.473198 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.479019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.479077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.479101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.479130 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.479156 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.492090 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.492253 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.492339 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.492631 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.492684 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.492770 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.492831 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.492879 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.492912 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:36:22.607007243 +0000 UTC Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.500431 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.506140 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.506186 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.506233 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.506256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.506274 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.526641 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.532463 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.532552 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.532570 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.532595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.532613 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.558173 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.564291 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.564343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.564360 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.564384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.564402 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.584448 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:56Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.584845 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.586746 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.586797 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.586819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.586844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.586861 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.642432 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.642675 4835 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:56 crc kubenswrapper[4835]: E0129 15:29:56.642767 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs podName:96bbc399-9838-4fd5-bf1e-66db67707c5c nodeName:}" failed. No retries permitted until 2026-01-29 15:31:00.642737968 +0000 UTC m=+162.835781652 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs") pod "network-metrics-daemon-p2hj4" (UID: "96bbc399-9838-4fd5-bf1e-66db67707c5c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.690217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.690276 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.690298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.690326 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.690347 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.794148 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.794211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.794228 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.794252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.794269 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.896927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.896979 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.896991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.897008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.897022 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.999375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.999438 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.999447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.999490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:56 crc kubenswrapper[4835]: I0129 15:29:56.999507 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:56Z","lastTransitionTime":"2026-01-29T15:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.102165 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.102221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.102240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.102265 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.102295 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.205872 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.205923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.205943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.205965 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.205981 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.309237 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.309287 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.309304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.309324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.309339 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.412054 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.412106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.412123 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.412147 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.412169 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.493399 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:01:54.256771546 +0000 UTC Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.515191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.515254 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.515272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.515299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.515319 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.618998 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.619063 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.619081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.619106 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.619126 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.721727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.721779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.721794 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.721812 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.721824 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.824251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.824311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.824329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.824390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.824412 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.927321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.927389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.927406 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.927433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:57 crc kubenswrapper[4835]: I0129 15:29:57.927455 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:57Z","lastTransitionTime":"2026-01-29T15:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.031174 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.031610 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.032002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.032201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.032394 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.135616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.135674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.135693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.135716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.135733 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.238969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.239012 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.239026 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.239044 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.239058 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.342072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.342441 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.342650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.342824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.342993 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.445674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.445712 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.445724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.445742 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.445754 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.491016 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.491087 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.491139 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.491143 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:29:58 crc kubenswrapper[4835]: E0129 15:29:58.491344 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:29:58 crc kubenswrapper[4835]: E0129 15:29:58.491627 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:29:58 crc kubenswrapper[4835]: E0129 15:29:58.491767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:29:58 crc kubenswrapper[4835]: E0129 15:29:58.491874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.494140 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 06:43:30.441103508 +0000 UTC Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.514088 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.532000 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2qgg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbf2aeec-7417-429a-8f23-3f341546d92f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9abbfb5f62213fa4b8c97c644be9e5ecb57bd9e1038854b270c649879c3ad94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9r6mx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2qgg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.549391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.549445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.549461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.549504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.549522 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.564141 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06588e6b-ac15-40f5-ae5d-03f95146dc91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:39Z\\\",\\\"message\\\":\\\"handler 2\\\\nI0129 15:29:39.527741 6857 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.528810 6857 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529113 6857 reflector.go:311] Stopping 
reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529148 6857 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529204 6857 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 15:29:39.529923 6857 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 15:29:39.529972 6857 factory.go:656] Stopping watch factory\\\\nI0129 15:29:39.530001 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 15:29:39.550846 6857 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 15:29:39.550887 6857 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 15:29:39.550971 6857 ovnkube.go:599] Stopped ovnkube\\\\nI0129 15:29:39.551014 6857 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 15:29:39.551141 6857 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:29:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54ad4c93bc4de1d54f
c3353300617b6ed821fc930ea3935e4d6870e445982579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkhxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-5j9fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.586426 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3639f6a8-5b4f-42af-9a6d-edc05634e6af\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe22735966e311bc6535bba6d2addd992a4ce7f2308e30bd96acfcb086099874\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9b8f6aae3e22aa3b8f3a2a907f4c3d77be687c87cc89b16a3d76e1dffd1521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.622762 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a71f843-99a9-4d45-9b62-7ebf53dd8c80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fcc460915a986054d118cb496e9ea3ad977a9209d9b62fe65137cdcf345a0d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8da3c8ae0abcbe7374d10b44ab0974c8314601120490e4ef4578c7a7e4823fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85d0092f1bdfa7c96f2c21c52f865b7908ce08824ed191b1d638a2f1a4523d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://169aed8a585b1cd69db92a848922181040718f9428f2eb26e12ecd56671b5dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915cdd39a7834aaa39adff7177c09ea6fc4a13b3453bf704f48e0d6ed0534ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://632f902238c98aa2b59bb40ed3e1bd1bfa29bd30eff6b16e7ddffac3a0f9f4fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d22b0649b9cc34229daf404ebe02fce784909b5ff365e52116f2326f21a0db1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d56b946def241d0472672e2baaa9c2c472204751f6afda9463cac6bd0ab5461f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.642134 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f9d724cf-a117-4c18-8750-30432822b5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f7517f8e0400d8da3402f7b7358a44f89470efb3d65369ca8cc83aee76446e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1dd1b464e192be235606992983d4bb6c02b46004e1e1778e1df15e1aa7d36c46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:
20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e9cafdc1e28f3a5adb0218371cc9e6847ee8371c1381355a0f3416b89c3dae2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.655773 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.655840 
4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.655855 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.655873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.655884 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.660505 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-57czq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49ee8e60-d5ca-491f-bb19-83286a79d337\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78a78b49346e54f4efa881fb83c5527756e045c1638e874e2684292d0933206c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5c928f0f9859005608fb0b77d4fc53f6d7b34476bdcecc82161fd5cb1f1127\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7307cf56cd1dcfd27db3121ce3d4ea07f9f63602582bd9238625080b26ae3065\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f00a36a3801237cedb7e57805e752f13b8a41d0e5664e14297dbd9fc4e374583\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20347
d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20347d1a1d7cc91aa54ea8c03c306fd0cb4acbd8bf5204039dff4953f8a3ef3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d822d457a24c6975d33f8227501fea802dbc1b70353e80b4d71955ff45345d16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:44Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c454a17acf73a5fdec18c213cbc69234400131f1bb06d13247d70147be4d374\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gx82j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-57czq\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.676158 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42364f95-2afe-46fc-8528-19b8dfcac6ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9732bf8a8dbc2e1243f9da9f37ee10cf6481d33dbc052fcc4f81dea29dde50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d5350cd60651c78cd0c63269642223850b166d301fa57ae97548b147c777699\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dws6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pmdjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.692102 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fpjxf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"559426b6-ff0a-4f7a-a9e1-de4c69ef5d34\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://990e67acaf184a785fccf469227e238921934899d26781c66bfa62c1f7f58bd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sdz77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fpjxf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.708332 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09e517d2-4b09-4dd3-a63f-fc29e6a046c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://098dd6a7521dc5048b7d7442aefa2900d06bfc1bd8910239df0c5e40b907a4cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c37fc7bb86991b48c7459466b6598be6c28447c2f8ac8b08698758cfba125b9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6d0a731231db6e7e27bd7c68c5de4ae54a1381212ee0c18e49da6056ae9bfd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d01b62358c812f03269cfffe86c4f470c028608c2cc45d8cea5ed9242706be33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.727871 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.746237 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.758762 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.758813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.758824 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.758838 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.758848 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.765016 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:42Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30c7ebd4ebf659289bd5df9b82738cda3adb3ad945eee2c694bca768f8d70f22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.784163 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6sstt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9537ec35-484e-4eeb-b081-e453750fb39e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:29:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T15:29:26Z\\\",\\\"message\\\":\\\"2026-01-29T15:28:40+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c\\\\n2026-01-29T15:28:40+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_778dd7a4-8e8d-4b3e-94a0-ea77d5786b4c to /host/opt/cni/bin/\\\\n2026-01-29T15:28:41Z [verbose] multus-daemon started\\\\n2026-01-29T15:28:41Z [verbose] Readiness Indicator file check\\\\n2026-01-29T15:29:26Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:29:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-mu
ltus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4rpbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:39Z\\\"}}\" for pod \"openshift-multus\"/\"multus-6sstt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.804692 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\"
,\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.824427 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.843672 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.862622 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.862750 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.862771 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.862799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.862817 4835 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.863648 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.882371 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:29:58Z is after 2025-08-24T17:21:41Z" Jan 29 15:29:58 crc 
kubenswrapper[4835]: I0129 15:29:58.965732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.966863 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.967011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.967138 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:58 crc kubenswrapper[4835]: I0129 15:29:58.967268 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:58Z","lastTransitionTime":"2026-01-29T15:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.069502 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.069562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.069580 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.069604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.069622 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.172714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.173079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.173217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.173340 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.173440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.277770 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.278294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.278559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.278763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.278915 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.381740 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.381807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.381825 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.381851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.381869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.485450 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.485520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.485534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.485551 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.485562 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.494539 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:10:42.160098235 +0000 UTC Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.589081 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.589196 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.589222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.589252 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.589275 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.692114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.692163 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.692175 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.692193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.692205 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.795576 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.795651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.795674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.795701 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.795722 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.898921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.898973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.898989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.899013 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:29:59 crc kubenswrapper[4835]: I0129 15:29:59.899031 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:29:59Z","lastTransitionTime":"2026-01-29T15:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.002159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.002227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.002247 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.002275 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.002300 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.105253 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.105292 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.105300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.105315 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.105324 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.208416 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.208454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.208476 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.208490 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.208500 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.311124 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.311183 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.311201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.311226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.311243 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.414564 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.414599 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.414607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.414621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.414630 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.491446 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.491555 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:00 crc kubenswrapper[4835]: E0129 15:30:00.491656 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.491722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:00 crc kubenswrapper[4835]: E0129 15:30:00.491872 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.491928 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:00 crc kubenswrapper[4835]: E0129 15:30:00.492027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:00 crc kubenswrapper[4835]: E0129 15:30:00.492215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.494661 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:33:10.11504879 +0000 UTC Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.516595 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.516650 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.516671 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.516697 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.516716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.619086 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.619144 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.619167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.619188 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.619203 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.722282 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.722324 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.722332 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.722346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.722357 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.825065 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.825102 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.825113 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.825128 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.825139 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.928690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.928763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.928799 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.928828 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:00 crc kubenswrapper[4835]: I0129 15:30:00.928851 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:00Z","lastTransitionTime":"2026-01-29T15:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.032173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.032226 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.032236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.032251 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.032264 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.134833 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.134893 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.134911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.134935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.134951 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.237656 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.237730 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.237739 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.237753 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.237762 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.340146 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.340185 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.340193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.340206 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.340214 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.443390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.443458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.443523 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.443557 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.443581 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.494856 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:30:37.766018846 +0000 UTC Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.547366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.547431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.547449 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.547495 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.547513 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.651244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.651298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.651314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.651342 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.651379 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.754934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.754997 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.755018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.755043 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.755060 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.858349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.858417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.858440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.858504 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.858529 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.962383 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.962432 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.962443 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.962460 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:01 crc kubenswrapper[4835]: I0129 15:30:01.962496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:01Z","lastTransitionTime":"2026-01-29T15:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.065349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.065429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.065447 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.065494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.065512 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.168988 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.169066 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.169090 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.169122 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.169146 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.272143 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.272192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.272203 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.272220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.272232 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.375259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.375346 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.375382 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.375413 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.375437 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.479304 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.479375 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.479395 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.479417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.479436 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.490969 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.491017 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.491054 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.491166 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:02 crc kubenswrapper[4835]: E0129 15:30:02.491160 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:02 crc kubenswrapper[4835]: E0129 15:30:02.491346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:02 crc kubenswrapper[4835]: E0129 15:30:02.491489 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:02 crc kubenswrapper[4835]: E0129 15:30:02.491598 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.495240 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:22:14.944414018 +0000 UTC Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.583048 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.583091 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.583105 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.583120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.583133 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.685643 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.685714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.685729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.685747 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.685758 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.788704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.788766 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.788781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.788814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.788831 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.892851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.892927 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.892937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.892961 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.892973 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.996665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.996786 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.996815 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.996856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:02 crc kubenswrapper[4835]: I0129 15:30:02.996883 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:02Z","lastTransitionTime":"2026-01-29T15:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.099344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.099393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.099405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.099425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.099440 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.202116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.202173 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.202189 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.202211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.202227 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.304338 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.304526 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.304560 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.304645 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.304674 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.407393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.407461 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.407535 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.407565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.407583 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.496281 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:40:17.758943934 +0000 UTC Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.510171 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.510211 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.510222 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.510238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.510249 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.612307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.612367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.612377 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.612400 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.612415 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.715052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.715101 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.715114 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.715135 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.715148 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.818068 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.818120 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.818136 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.818177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.818189 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.921767 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.921836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.921857 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.921880 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:03 crc kubenswrapper[4835]: I0129 15:30:03.921898 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:03Z","lastTransitionTime":"2026-01-29T15:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.024159 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.024193 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.024201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.024214 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.024223 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.126303 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.126343 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.126352 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.126366 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.126377 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.228376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.228420 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.228436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.228459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.228515 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.330793 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.330826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.330836 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.330850 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.330860 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.433344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.433385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.433396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.433411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.433422 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.491928 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.491973 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.491926 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:04 crc kubenswrapper[4835]: E0129 15:30:04.492066 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.491934 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:04 crc kubenswrapper[4835]: E0129 15:30:04.492163 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:04 crc kubenswrapper[4835]: E0129 15:30:04.492301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:04 crc kubenswrapper[4835]: E0129 15:30:04.492388 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.496824 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:05:50.041921099 +0000 UTC Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.535719 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.535748 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.535757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.535769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.535778 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.638002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.638059 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.638070 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.638084 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.638109 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.740002 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.740040 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.740051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.740064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.740074 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.842408 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.842446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.842457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.842494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.842505 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.944459 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.944518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.944530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.944546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:04 crc kubenswrapper[4835]: I0129 15:30:04.944560 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:04Z","lastTransitionTime":"2026-01-29T15:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.046829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.046862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.046870 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.046883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.046891 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.149384 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.149428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.149436 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.149452 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.149460 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.251966 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.252003 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.252011 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.252025 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.252035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.354001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.354034 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.354042 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.354057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.354066 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.457702 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.457754 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.457768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.457788 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.457804 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.497732 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:05:30.459376012 +0000 UTC Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.560321 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.560349 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.560357 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.560368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.560378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.662354 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.662387 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.662396 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.662410 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.662420 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.764837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.764865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.764873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.764884 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.764892 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.867433 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.867457 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.867510 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.867522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.867551 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.970359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.970412 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.970429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.970451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:05 crc kubenswrapper[4835]: I0129 15:30:05.970493 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:05Z","lastTransitionTime":"2026-01-29T15:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.073860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.073905 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.073921 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.073941 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.073957 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.177379 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.177428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.177446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.177522 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.177561 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.281236 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.281311 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.281330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.281356 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.281375 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.383867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.383929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.383947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.383969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.383988 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.486951 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.487010 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.487027 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.487051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.487069 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.491685 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.491797 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.491831 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.492012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.492312 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.492602 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.492697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.493278 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.493777 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.494043 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.497842 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:13:23.527833329 +0000 UTC Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.589896 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.589974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.589993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc 
kubenswrapper[4835]: I0129 15:30:06.590018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.590035 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.692639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.692699 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.692715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.692738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.692756 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.729776 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.729826 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.729844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.729864 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.729880 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.750662 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.756266 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.756314 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.756333 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.756359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.756378 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.778408 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.783715 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.783768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.783785 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.783806 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.783856 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.831364 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.836924 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.836977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.836999 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.837020 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.837036 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.858227 4835 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T15:30:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"40fe619e-6036-48d9-81f7-6a2798eec2f7\\\",\\\"systemUUID\\\":\\\"4955aaf8-d9bd-4384-b65d-ab2349d3a649\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:06Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:06 crc kubenswrapper[4835]: E0129 15:30:06.858441 4835 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.860539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.860582 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.860604 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.860625 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.860640 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.964119 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.964166 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.964181 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.964201 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:06 crc kubenswrapper[4835]: I0129 15:30:06.964215 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:06Z","lastTransitionTime":"2026-01-29T15:30:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.066615 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.066673 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.066690 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.066714 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.066731 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.169359 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.169427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.169445 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.169499 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.169517 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.273674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.273757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.273778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.273818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.273834 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.377244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.377294 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.377305 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.377323 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.377336 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.479842 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.479873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.479883 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.479898 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.479910 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.499003 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:04:51.396144798 +0000 UTC Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.582844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.582902 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.582919 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.582947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.582966 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.685881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.685955 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.685973 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.686001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.686031 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.788683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.788759 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.788778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.788802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.788821 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.891929 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.892006 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.892033 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.892064 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.892088 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.994789 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.994823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.994834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.994848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:07 crc kubenswrapper[4835]: I0129 15:30:07.994859 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:07Z","lastTransitionTime":"2026-01-29T15:30:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.097531 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.097578 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.097589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.097602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.097610 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.200972 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.201051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.201077 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.201218 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.201261 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.304674 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.304731 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.304751 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.304779 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.304798 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.408556 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.408657 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.408678 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.408703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.408716 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.491414 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.491542 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.491582 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:08 crc kubenswrapper[4835]: E0129 15:30:08.491930 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.492130 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:08 crc kubenswrapper[4835]: E0129 15:30:08.492567 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:08 crc kubenswrapper[4835]: E0129 15:30:08.493180 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:08 crc kubenswrapper[4835]: E0129 15:30:08.493546 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.499309 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:05:37.166581133 +0000 UTC Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.511672 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96bbc399-9838-4fd5-bf1e-66db67707c5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:52Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-p2hj4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.513888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.513948 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.513974 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.514004 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.514028 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.530751 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e865a0c1-3c6b-42b8-bf83-fd54eaffd425\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T15:28:32Z\\\",\\\"message\\\":\\\"W0129 15:28:21.897101 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 15:28:21.898408 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769700501 cert, and key in /tmp/serving-cert-3680015849/serving-signer.crt, /tmp/serving-cert-3680015849/serving-signer.key\\\\nI0129 15:28:22.417680 1 observer_polling.go:159] Starting file observer\\\\nW0129 15:28:22.420532 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 15:28:22.420800 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 15:28:22.423338 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3680015849/tls.crt::/tmp/serving-cert-3680015849/tls.key\\\\\\\"\\\\nF0129 15:28:32.918699 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:21Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T15:28:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-29T15:28:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:18Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.546219 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9df22808e347366e04b0f64baa20658c2bee250a269d55834e26bb5ded725eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.563661 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3856d21d7ce9d9169b7ece4baf6b4025f33780e7864076a4846a68d18b1e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://568ba589b20b2d6e6bf5473875ecf9dfe7367b0a18fdd98ebf43b257babe0499\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.581083 4835 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82353ab9-3930-4794-8182-a1e0948778df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T15:28:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71ad9351c6e3fb0a79749553aa434a76c21cea9cf75136805f934ce641145c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d16ee1480832405de16bc4374258d7628a66ede
197c0bd46774558decf5ec6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T15:28:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7tsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T15:28:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gpcs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T15:30:08Z is after 2025-08-24T17:21:41Z" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.613600 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-s2qgg" podStartSLOduration=90.613580163 podStartE2EDuration="1m30.613580163s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.613201924 +0000 UTC m=+110.806245598" watchObservedRunningTime="2026-01-29 15:30:08.613580163 +0000 UTC 
m=+110.806623817" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.622037 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.622071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.622079 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.622097 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.622108 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.648153 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pmdjv" podStartSLOduration=90.648130891 podStartE2EDuration="1m30.648130891s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.6480877 +0000 UTC m=+110.841131394" watchObservedRunningTime="2026-01-29 15:30:08.648130891 +0000 UTC m=+110.841174555" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.692195 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=90.692176495 podStartE2EDuration="1m30.692176495s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.691265504 +0000 UTC m=+110.884309168" watchObservedRunningTime="2026-01-29 15:30:08.692176495 +0000 UTC m=+110.885220159" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.692376 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=45.692371009 podStartE2EDuration="45.692371009s" podCreationTimestamp="2026-01-29 15:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.657166786 +0000 UTC m=+110.850210440" watchObservedRunningTime="2026-01-29 15:30:08.692371009 +0000 UTC m=+110.885414663" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.705116 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podStartSLOduration=90.705098229 podStartE2EDuration="1m30.705098229s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.704154808 +0000 UTC m=+110.897198472" watchObservedRunningTime="2026-01-29 15:30:08.705098229 +0000 UTC m=+110.898141883" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.724373 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.724429 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.724446 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.724501 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.724521 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.728700 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-57czq" podStartSLOduration=90.728676486 podStartE2EDuration="1m30.728676486s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.727898599 +0000 UTC m=+110.920942253" watchObservedRunningTime="2026-01-29 15:30:08.728676486 +0000 UTC m=+110.921720170" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.742840 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6sstt" podStartSLOduration=90.742814899 podStartE2EDuration="1m30.742814899s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.742273236 +0000 UTC m=+110.935316890" watchObservedRunningTime="2026-01-29 15:30:08.742814899 +0000 UTC m=+110.935858573" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.767730 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fpjxf" podStartSLOduration=90.767710496 podStartE2EDuration="1m30.767710496s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.754816062 +0000 UTC m=+110.947859716" watchObservedRunningTime="2026-01-29 15:30:08.767710496 +0000 UTC m=+110.960754150" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.781000 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.780982989 
podStartE2EDuration="57.780982989s" podCreationTimestamp="2026-01-29 15:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:08.76833699 +0000 UTC m=+110.961380644" watchObservedRunningTime="2026-01-29 15:30:08.780982989 +0000 UTC m=+110.974026643" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.826969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.827008 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.827016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.827029 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.827039 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.930300 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.930372 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.930391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.930418 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:08 crc kubenswrapper[4835]: I0129 15:30:08.930437 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:08Z","lastTransitionTime":"2026-01-29T15:30:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.032152 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.032221 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.032231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.032262 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.032271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.134156 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.134223 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.134240 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.134259 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.134271 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.236591 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.236617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.236629 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.236663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.236677 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.339220 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.339248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.339256 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.339268 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.339278 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.442103 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.442167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.442191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.442217 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.442235 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.500036 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:25:15.339299134 +0000 UTC Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.544718 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.544768 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.544782 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.544802 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.544816 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.647390 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.647427 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.647437 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.647453 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.647496 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.752018 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.752075 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.752092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.752116 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.752132 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.855818 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.855865 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.855877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.855895 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.855909 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.959528 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.959589 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.959605 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.959633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:09 crc kubenswrapper[4835]: I0129 15:30:09.959648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:09Z","lastTransitionTime":"2026-01-29T15:30:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.062207 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.062286 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.062299 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.062328 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.062343 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.165301 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.165362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.165376 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.165397 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.165414 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.274607 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.274653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.274666 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.274684 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.274696 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.378191 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.378269 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.378298 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.378334 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.378359 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.484649 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.484706 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.484716 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.484732 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.484747 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.492027 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.492112 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:10 crc kubenswrapper[4835]: E0129 15:30:10.492333 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.492414 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:10 crc kubenswrapper[4835]: E0129 15:30:10.492539 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:10 crc kubenswrapper[4835]: E0129 15:30:10.492614 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.492823 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:10 crc kubenswrapper[4835]: E0129 15:30:10.492954 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.500787 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:24:46.960113948 +0000 UTC Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.587937 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.588005 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.588024 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.588051 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.588070 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.691539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.691600 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.691612 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.691634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.691648 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.794617 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.794660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.794670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.794687 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.794699 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.897934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.897979 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.897991 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.898007 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:10 crc kubenswrapper[4835]: I0129 15:30:10.898018 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:10Z","lastTransitionTime":"2026-01-29T15:30:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.000813 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.000952 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.001028 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.001108 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.001140 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.105596 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.105653 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.105663 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.105683 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.105695 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.208971 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.209057 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.209082 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.209117 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.209259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.312394 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.312500 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.312521 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.312553 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.312575 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.415942 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.416016 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.416038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.416071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.416092 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.501325 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 05:12:39.486559256 +0000 UTC Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.519319 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.519388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.519404 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.519431 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.519450 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.622620 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.622693 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.622708 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.622724 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.622737 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.726764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.726856 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.726886 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.726923 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.726947 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.831781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.831873 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.831911 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.831935 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.831949 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.934764 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.934829 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.934841 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.934866 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:11 crc kubenswrapper[4835]: I0129 15:30:11.934882 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:11Z","lastTransitionTime":"2026-01-29T15:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.037157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.037209 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.037219 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.037238 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.037266 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.140800 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.140899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.140925 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.140959 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.140980 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.244539 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.244639 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.244665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.244704 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.244736 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.347351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.348385 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.348456 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.348520 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.348549 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.452810 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.452881 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.452899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.452938 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.452961 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.491513 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.491546 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.491630 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.491657 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:12 crc kubenswrapper[4835]: E0129 15:30:12.491656 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:12 crc kubenswrapper[4835]: E0129 15:30:12.491739 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:12 crc kubenswrapper[4835]: E0129 15:30:12.491818 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:12 crc kubenswrapper[4835]: E0129 15:30:12.491884 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.501751 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:44:59.206433704 +0000 UTC Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.556224 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.556271 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.556284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.556307 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.556324 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.660411 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.660481 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.660491 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.660509 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.660534 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.763494 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.763577 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.763598 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.763618 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.763660 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.866900 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.866934 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.866947 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.866969 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.866987 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.968993 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.969041 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.969052 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.969072 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:12 crc kubenswrapper[4835]: I0129 15:30:12.969084 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:12Z","lastTransitionTime":"2026-01-29T15:30:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.074345 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.074451 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.079231 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.079648 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.079671 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.158791 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/1.log" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.160204 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/0.log" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.160525 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerDied","Data":"a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.160744 4835 scope.go:117] "RemoveContainer" containerID="93dde74e7133e925af430ba8c79a3046928d91c08dcb5f629ec94aca64f66739" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.160417 4835 generic.go:334] "Generic (PLEG): container finished" podID="9537ec35-484e-4eeb-b081-e453750fb39e" containerID="a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759" exitCode=1 Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.161649 4835 scope.go:117] "RemoveContainer" containerID="a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759" Jan 29 15:30:13 crc kubenswrapper[4835]: E0129 15:30:13.162079 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-6sstt_openshift-multus(9537ec35-484e-4eeb-b081-e453750fb39e)\"" pod="openshift-multus/multus-6sstt" podUID="9537ec35-484e-4eeb-b081-e453750fb39e" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.184272 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc 
kubenswrapper[4835]: I0129 15:30:13.184344 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.184362 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.184389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.184407 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.242068 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podStartSLOduration=95.242030465 podStartE2EDuration="1m35.242030465s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:13.241920743 +0000 UTC m=+115.434964407" watchObservedRunningTime="2026-01-29 15:30:13.242030465 +0000 UTC m=+115.435074119" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.288330 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.288417 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.288445 4835 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.288513 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.288546 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.295735 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=95.295692548 podStartE2EDuration="1m35.295692548s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:13.294613294 +0000 UTC m=+115.487657008" watchObservedRunningTime="2026-01-29 15:30:13.295692548 +0000 UTC m=+115.488736252" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.391769 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.391821 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.391834 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.391857 4835 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.391870 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.494665 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.494711 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.494726 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.494744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.494757 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.502326 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:22:10.627899476 +0000 UTC Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.598309 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.598367 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.598380 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.598398 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.598412 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.701651 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.701722 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.701738 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.701761 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.701776 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.805092 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.805389 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.805419 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.805454 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.805557 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.908614 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.908660 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.908672 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.908691 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:13 crc kubenswrapper[4835]: I0129 15:30:13.908703 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:13Z","lastTransitionTime":"2026-01-29T15:30:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.012329 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.012391 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.012405 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.012426 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.012439 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.115729 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.115801 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.115819 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.115849 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.115869 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.166289 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/1.log" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.220745 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.220823 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.220844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.220876 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.220896 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.324347 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.324425 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.324440 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.324483 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.324503 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.429071 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.429151 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.429177 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.429215 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.429242 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.491106 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.491124 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.491225 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.491274 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:14 crc kubenswrapper[4835]: E0129 15:30:14.491494 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:14 crc kubenswrapper[4835]: E0129 15:30:14.491686 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:14 crc kubenswrapper[4835]: E0129 15:30:14.492157 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:14 crc kubenswrapper[4835]: E0129 15:30:14.492308 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.503089 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:09:03.328841831 +0000 UTC Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.533157 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.533232 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.533250 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.533284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.533313 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.637534 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.637601 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.637616 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.637638 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.637652 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.742378 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.742458 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.742540 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.742574 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.742598 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.845720 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.845804 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.845871 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.846038 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.846066 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.950493 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.950537 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.950546 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.950562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:14 crc kubenswrapper[4835]: I0129 15:30:14.950572 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:14Z","lastTransitionTime":"2026-01-29T15:30:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.053680 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.053744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.053757 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.053778 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.053792 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.155744 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.155781 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.155791 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.155807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.155817 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.258244 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.258559 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.258633 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.258703 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.259406 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.361602 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.361634 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.361642 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.361654 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.361663 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.464197 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.464227 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.464235 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.464248 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.464259 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.503531 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:30:45.302111384 +0000 UTC Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.566844 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.566888 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.566899 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.566914 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.566925 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.669763 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.669795 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.669807 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.669822 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.669831 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.774009 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.774303 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.774609 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.774814 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.774989 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.877867 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.878399 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.878518 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.878621 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.878959 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.983297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.983350 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.983364 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.983388 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:15 crc kubenswrapper[4835]: I0129 15:30:15.983427 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:15Z","lastTransitionTime":"2026-01-29T15:30:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.086670 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.087167 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.087369 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.087646 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.087842 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.191851 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.192837 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.193087 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.193278 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.193452 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.296860 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.297192 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.297289 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.297381 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.297497 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.401127 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.401565 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.401727 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.401930 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.402081 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.490935 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.491049 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:16 crc kubenswrapper[4835]: E0129 15:30:16.491074 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.491129 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.491127 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:16 crc kubenswrapper[4835]: E0129 15:30:16.491257 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:16 crc kubenswrapper[4835]: E0129 15:30:16.491331 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:16 crc kubenswrapper[4835]: E0129 15:30:16.491393 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504011 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:34:54.360549189 +0000 UTC Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504862 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504892 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504903 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504918 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.504930 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.607943 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.607989 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.608001 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.608019 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.608032 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.711204 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.711284 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.711297 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.711312 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.711321 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.814293 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.814351 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.814368 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.814393 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.814410 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.917428 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.917512 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.917530 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.917555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:16 crc kubenswrapper[4835]: I0129 15:30:16.917572 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:16Z","lastTransitionTime":"2026-01-29T15:30:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.019784 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.019848 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.019861 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.019877 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.019889 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:17Z","lastTransitionTime":"2026-01-29T15:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.053562 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.053879 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.053977 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.054067 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.054157 4835 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T15:30:17Z","lastTransitionTime":"2026-01-29T15:30:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.118198 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p"] Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.119820 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.122357 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.122626 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.125744 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.125745 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.272135 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38f886e9-11ec-465a-b465-b9d987505c0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.272176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38f886e9-11ec-465a-b465-b9d987505c0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.272209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.272446 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.272632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f886e9-11ec-465a-b465-b9d987505c0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374067 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374185 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374261 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374400 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/38f886e9-11ec-465a-b465-b9d987505c0b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374456 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f886e9-11ec-465a-b465-b9d987505c0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374648 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38f886e9-11ec-465a-b465-b9d987505c0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.374746 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/38f886e9-11ec-465a-b465-b9d987505c0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.376826 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/38f886e9-11ec-465a-b465-b9d987505c0b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.388518 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38f886e9-11ec-465a-b465-b9d987505c0b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.392906 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38f886e9-11ec-465a-b465-b9d987505c0b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lx62p\" (UID: \"38f886e9-11ec-465a-b465-b9d987505c0b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.445101 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.504347 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:38:52.171260016 +0000 UTC Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.504411 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 15:30:17 crc kubenswrapper[4835]: I0129 15:30:17.515881 4835 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.180153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" event={"ID":"38f886e9-11ec-465a-b465-b9d987505c0b","Type":"ContainerStarted","Data":"f449adcca1ffb0a7c850e362037b68f0361660dd21e9592229a7022f81904308"} Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.180639 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" event={"ID":"38f886e9-11ec-465a-b465-b9d987505c0b","Type":"ContainerStarted","Data":"64d6f75255396936f60c7975e1b8eb23b14a34a5a2d07ff7e8d965362fe966f3"} Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.469175 4835 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.491990 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.492003 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.492060 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.492144 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.494191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.494312 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.494600 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.494763 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:18 crc kubenswrapper[4835]: I0129 15:30:18.496157 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.496527 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-5j9fl_openshift-ovn-kubernetes(06588e6b-ac15-40f5-ae5d-03f95146dc91)\"" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" Jan 29 15:30:18 crc kubenswrapper[4835]: E0129 15:30:18.609405 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:30:20 crc kubenswrapper[4835]: I0129 15:30:20.491383 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:20 crc kubenswrapper[4835]: I0129 15:30:20.491390 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:20 crc kubenswrapper[4835]: I0129 15:30:20.491410 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:20 crc kubenswrapper[4835]: I0129 15:30:20.491651 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:20 crc kubenswrapper[4835]: E0129 15:30:20.491662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:20 crc kubenswrapper[4835]: E0129 15:30:20.491790 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:20 crc kubenswrapper[4835]: E0129 15:30:20.491881 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:20 crc kubenswrapper[4835]: E0129 15:30:20.491965 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:22 crc kubenswrapper[4835]: I0129 15:30:22.491561 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:22 crc kubenswrapper[4835]: I0129 15:30:22.491646 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:22 crc kubenswrapper[4835]: I0129 15:30:22.491614 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:22 crc kubenswrapper[4835]: I0129 15:30:22.491572 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:22 crc kubenswrapper[4835]: E0129 15:30:22.491820 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:22 crc kubenswrapper[4835]: E0129 15:30:22.492012 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:22 crc kubenswrapper[4835]: E0129 15:30:22.492063 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:22 crc kubenswrapper[4835]: E0129 15:30:22.492126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:23 crc kubenswrapper[4835]: I0129 15:30:23.491977 4835 scope.go:117] "RemoveContainer" containerID="a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759" Jan 29 15:30:23 crc kubenswrapper[4835]: I0129 15:30:23.518060 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lx62p" podStartSLOduration=105.517976289 podStartE2EDuration="1m45.517976289s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:18.203656501 +0000 UTC m=+120.396700185" watchObservedRunningTime="2026-01-29 15:30:23.517976289 +0000 UTC m=+125.711019943" Jan 29 15:30:23 crc kubenswrapper[4835]: E0129 15:30:23.611095 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.201870 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/1.log" Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.201948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerStarted","Data":"45aa09a6978751ffa20695bfce6a11506e5ba3fac9f7b465cf9f981e178787a1"} Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.491731 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.491761 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:24 crc kubenswrapper[4835]: E0129 15:30:24.491891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.491988 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:24 crc kubenswrapper[4835]: E0129 15:30:24.492022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:24 crc kubenswrapper[4835]: I0129 15:30:24.492027 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:24 crc kubenswrapper[4835]: E0129 15:30:24.492185 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:24 crc kubenswrapper[4835]: E0129 15:30:24.492323 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:26 crc kubenswrapper[4835]: I0129 15:30:26.491513 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:26 crc kubenswrapper[4835]: E0129 15:30:26.491902 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:26 crc kubenswrapper[4835]: I0129 15:30:26.491581 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:26 crc kubenswrapper[4835]: I0129 15:30:26.491558 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:26 crc kubenswrapper[4835]: E0129 15:30:26.491993 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:26 crc kubenswrapper[4835]: I0129 15:30:26.491602 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:26 crc kubenswrapper[4835]: E0129 15:30:26.492066 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:26 crc kubenswrapper[4835]: E0129 15:30:26.492162 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:28 crc kubenswrapper[4835]: I0129 15:30:28.491325 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:28 crc kubenswrapper[4835]: I0129 15:30:28.491498 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:28 crc kubenswrapper[4835]: I0129 15:30:28.491536 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:28 crc kubenswrapper[4835]: E0129 15:30:28.493062 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:28 crc kubenswrapper[4835]: I0129 15:30:28.493204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:28 crc kubenswrapper[4835]: E0129 15:30:28.493291 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:28 crc kubenswrapper[4835]: E0129 15:30:28.493543 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:28 crc kubenswrapper[4835]: E0129 15:30:28.494405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:28 crc kubenswrapper[4835]: E0129 15:30:28.612564 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:30:30 crc kubenswrapper[4835]: I0129 15:30:30.491531 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:30 crc kubenswrapper[4835]: E0129 15:30:30.491798 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:30 crc kubenswrapper[4835]: I0129 15:30:30.491812 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:30 crc kubenswrapper[4835]: I0129 15:30:30.491851 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:30 crc kubenswrapper[4835]: I0129 15:30:30.491917 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:30 crc kubenswrapper[4835]: E0129 15:30:30.491962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:30 crc kubenswrapper[4835]: E0129 15:30:30.492247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:30 crc kubenswrapper[4835]: E0129 15:30:30.492369 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:32 crc kubenswrapper[4835]: I0129 15:30:32.491277 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:32 crc kubenswrapper[4835]: I0129 15:30:32.491283 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:32 crc kubenswrapper[4835]: I0129 15:30:32.491311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:32 crc kubenswrapper[4835]: I0129 15:30:32.491431 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:32 crc kubenswrapper[4835]: E0129 15:30:32.491667 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:32 crc kubenswrapper[4835]: E0129 15:30:32.491862 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:32 crc kubenswrapper[4835]: E0129 15:30:32.491988 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:32 crc kubenswrapper[4835]: E0129 15:30:32.492637 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:32 crc kubenswrapper[4835]: I0129 15:30:32.493211 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.241564 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/3.log" Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.245935 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerStarted","Data":"576d4fd3b9f9db3b346f0a358de77bf2faf241d81532ec4587d184d496fceb5d"} Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.246630 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.565153 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podStartSLOduration=115.565122737 podStartE2EDuration="1m55.565122737s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:33.28632394 
+0000 UTC m=+135.479367614" watchObservedRunningTime="2026-01-29 15:30:33.565122737 +0000 UTC m=+135.758166391" Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.566083 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p2hj4"] Jan 29 15:30:33 crc kubenswrapper[4835]: I0129 15:30:33.566225 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:33 crc kubenswrapper[4835]: E0129 15:30:33.566346 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:33 crc kubenswrapper[4835]: E0129 15:30:33.614448 4835 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:30:34 crc kubenswrapper[4835]: I0129 15:30:34.490862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:34 crc kubenswrapper[4835]: E0129 15:30:34.491267 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:34 crc kubenswrapper[4835]: I0129 15:30:34.491016 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:34 crc kubenswrapper[4835]: E0129 15:30:34.491325 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:34 crc kubenswrapper[4835]: I0129 15:30:34.490997 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:34 crc kubenswrapper[4835]: E0129 15:30:34.491370 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:35 crc kubenswrapper[4835]: I0129 15:30:35.491322 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:35 crc kubenswrapper[4835]: E0129 15:30:35.491632 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:36 crc kubenswrapper[4835]: I0129 15:30:36.491542 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:36 crc kubenswrapper[4835]: I0129 15:30:36.491650 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:36 crc kubenswrapper[4835]: E0129 15:30:36.491716 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:36 crc kubenswrapper[4835]: I0129 15:30:36.492017 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:36 crc kubenswrapper[4835]: E0129 15:30:36.492103 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:36 crc kubenswrapper[4835]: E0129 15:30:36.492140 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:37 crc kubenswrapper[4835]: I0129 15:30:37.491270 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:37 crc kubenswrapper[4835]: E0129 15:30:37.491421 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-p2hj4" podUID="96bbc399-9838-4fd5-bf1e-66db67707c5c" Jan 29 15:30:38 crc kubenswrapper[4835]: I0129 15:30:38.491590 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:38 crc kubenswrapper[4835]: E0129 15:30:38.492597 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 15:30:38 crc kubenswrapper[4835]: I0129 15:30:38.492655 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:38 crc kubenswrapper[4835]: I0129 15:30:38.492704 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:38 crc kubenswrapper[4835]: E0129 15:30:38.492920 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 15:30:38 crc kubenswrapper[4835]: E0129 15:30:38.492916 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 15:30:39 crc kubenswrapper[4835]: I0129 15:30:39.491229 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:30:39 crc kubenswrapper[4835]: I0129 15:30:39.493937 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:30:39 crc kubenswrapper[4835]: I0129 15:30:39.493956 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.491223 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.491264 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.491962 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.495365 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.496560 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.496783 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:30:40 crc kubenswrapper[4835]: I0129 15:30:40.497928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.404552 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:46 crc kubenswrapper[4835]: E0129 15:30:46.404868 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:32:48.404800924 +0000 UTC m=+270.597844648 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.506542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.506617 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:46 
crc kubenswrapper[4835]: I0129 15:30:46.506662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.506701 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.507956 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.514093 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.514557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.514777 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.514892 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.525551 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 15:30:46 crc kubenswrapper[4835]: I0129 15:30:46.534575 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 15:30:46 crc kubenswrapper[4835]: W0129 15:30:46.780791 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-1cc527cf3301fbb7e5a73196cd89478a49a86d2f473cc67d6712a35864125997 WatchSource:0}: Error finding container 1cc527cf3301fbb7e5a73196cd89478a49a86d2f473cc67d6712a35864125997: Status 404 returned error can't find the container with id 1cc527cf3301fbb7e5a73196cd89478a49a86d2f473cc67d6712a35864125997 Jan 29 15:30:46 crc kubenswrapper[4835]: W0129 15:30:46.781620 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ba008762f5612aa3f3b0fde151d884bcae056e2cfe1ba1b335d684334608c6a6 WatchSource:0}: Error finding container ba008762f5612aa3f3b0fde151d884bcae056e2cfe1ba1b335d684334608c6a6: Status 404 returned error can't find the container with id ba008762f5612aa3f3b0fde151d884bcae056e2cfe1ba1b335d684334608c6a6 Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.295727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c4665d168f3ea8ae6c91bfbf65201f1a54f38510221adfe343885c8ce1643425"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.296302 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ba008762f5612aa3f3b0fde151d884bcae056e2cfe1ba1b335d684334608c6a6"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.297811 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2ef665f3fd07419d4396ad2e4ce210d635866a3210cc6caca7812f3ec5c0a0fd"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.297838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a3a0511b70963164f6a75c56f18fd1dabf2b9d497bac7f2b11742f5e727771e7"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.300103 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"70b9fc729df9d9a6b23ed46b7b0118a9ca0b9815764ff5497b7943419ba36b83"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.300195 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1cc527cf3301fbb7e5a73196cd89478a49a86d2f473cc67d6712a35864125997"} Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.300450 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.769555 4835 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.805892 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxtmr"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.806518 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.815905 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.816567 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.816912 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825413 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-serving-cert\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-encryption-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825529 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-node-pullsecrets\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825768 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.825926 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-client\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.826009 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jsps\" (UniqueName: \"kubernetes.io/projected/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-kube-api-access-7jsps\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.826412 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 
15:30:47.826540 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.826582 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit-dir\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.826758 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-image-import-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.828174 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.829978 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.830842 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.842531 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.842721 4835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.842931 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.843063 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.843253 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.844296 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.846979 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847160 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847310 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847542 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847739 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847768 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dcp7n"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.847918 4835 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.848239 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.848385 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849010 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849374 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849413 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849713 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849824 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.849936 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.850062 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.850180 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.850297 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.850957 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.851432 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.854492 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.854959 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.858393 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.858933 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.859396 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-88xw8"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.859444 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.859416 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.861664 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.864248 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.864791 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.864917 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865098 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865260 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865428 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865557 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865648 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865670 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865722 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865765 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.865866 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.867514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wrghl"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868065 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868265 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868400 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868620 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868649 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868770 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.868919 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.869752 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.869881 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.869993 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nllch"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.870018 4835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.870285 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.870534 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.870691 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.871564 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.872235 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.872320 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.873441 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g8qkk"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.874123 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.874261 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.874728 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.874965 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.875145 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.875254 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.875383 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.875591 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.875872 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.876619 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.877023 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8cwqh"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.877384 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.877445 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878053 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878176 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878453 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878506 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878586 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878596 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878617 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:30:47 
crc kubenswrapper[4835]: I0129 15:30:47.878722 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878745 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878752 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878511 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878806 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878852 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.878855 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.879399 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.879907 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2v9n5"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.880743 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.880895 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.887691 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.897381 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.897439 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.898019 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.904185 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.905578 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.911901 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.912352 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.932365 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.933552 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.933687 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.933787 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.933874 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.933963 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.934058 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.934156 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.934237 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.935949 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.936066 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.936178 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.936279 4835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.936394 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.936508 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.937212 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.937621 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.938219 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.938299 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.938378 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2s88\" (UniqueName: \"kubernetes.io/projected/a6394550-97b4-49a5-b726-602943985ef8-kube-api-access-c2s88\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.938423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-etcd-client\") pod 
\"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.938594 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.939346 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.939518 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.939965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-encryption-config\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940005 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940241 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-audit-policies\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940335 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-image-import-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940488 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de99438b-f023-4f1f-97e0-25dcebd1b0bf-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940528 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5848904-e808-42fe-a7d6-1710f0dc09a2-metrics-tls\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940559 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6394550-97b4-49a5-b726-602943985ef8-audit-dir\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940591 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-node-pullsecrets\") pod 
\"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940622 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-serving-cert\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940648 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jsps\" (UniqueName: \"kubernetes.io/projected/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-kube-api-access-7jsps\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940670 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940734 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwqhw\" (UniqueName: \"kubernetes.io/projected/c5848904-e808-42fe-a7d6-1710f0dc09a2-kube-api-access-qwqhw\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940733 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-node-pullsecrets\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1a6ef06-ee22-4874-a6be-cd8300afc95f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940833 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940862 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit-dir\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940892 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de99438b-f023-4f1f-97e0-25dcebd1b0bf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 
15:30:47.940915 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.940952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-serving-cert\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944384 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-encryption-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944530 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944567 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-client\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944607 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm59z\" (UniqueName: \"kubernetes.io/projected/c1a6ef06-ee22-4874-a6be-cd8300afc95f-kube-api-access-bm59z\") pod \"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.941307 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944662 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tsx\" (UniqueName: \"kubernetes.io/projected/de99438b-f023-4f1f-97e0-25dcebd1b0bf-kube-api-access-b6tsx\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.941370 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit-dir\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.941761 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc 
kubenswrapper[4835]: I0129 15:30:47.942725 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.942803 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.943926 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.944564 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.945791 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-audit\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.943297 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.941966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-image-import-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.947723 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.948811 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.950922 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.952996 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.952235 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-client\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.954257 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.956359 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.956639 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.956757 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8qfs"] Jan 29 15:30:47 crc 
kubenswrapper[4835]: I0129 15:30:47.957116 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.957304 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.958530 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z6rg2"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.959737 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.962195 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.962167 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-encryption-config\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.962771 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.962934 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.962985 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.963539 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.964458 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.964815 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.964965 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-serving-cert\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.965960 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.985139 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.985304 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-jxtmr"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.985365 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.985871 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.987289 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.987415 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5"] Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.987535 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.990602 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:30:47 crc kubenswrapper[4835]: I0129 15:30:47.991648 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.001693 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.003611 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.007930 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.008725 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.008727 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqtcm"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.010582 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.010981 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.011022 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.011672 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.011983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.012132 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.012361 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.012951 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.017951 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.019020 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.019327 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.019927 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.021002 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.024560 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dcp7n"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.024603 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.025117 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.025171 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.027629 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.030278 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.030835 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.033590 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g8qkk"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.036080 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.037301 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z6rg2"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.039812 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.041550 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8cwqh"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.043342 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046230 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de99438b-f023-4f1f-97e0-25dcebd1b0bf-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0f506f-1e2f-4824-b95d-e97abd907131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046307 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/c5848904-e808-42fe-a7d6-1710f0dc09a2-metrics-tls\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046333 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6394550-97b4-49a5-b726-602943985ef8-audit-dir\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046357 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd0f506f-1e2f-4824-b95d-e97abd907131-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046384 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-serving-cert\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046412 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046431 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qwqhw\" (UniqueName: \"kubernetes.io/projected/c5848904-e808-42fe-a7d6-1710f0dc09a2-kube-api-access-qwqhw\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1a6ef06-ee22-4874-a6be-cd8300afc95f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046510 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wrghl"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ndp\" (UniqueName: \"kubernetes.io/projected/d9ede1dc-ad19-4261-97f2-605dd20383c1-kube-api-access-c9ndp\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046575 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f0be43-3630-48bc-988f-22965b012136-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046597 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de99438b-f023-4f1f-97e0-25dcebd1b0bf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046614 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a6394550-97b4-49a5-b726-602943985ef8-audit-dir\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046621 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ede1dc-ad19-4261-97f2-605dd20383c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046721 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tsx\" (UniqueName: \"kubernetes.io/projected/de99438b-f023-4f1f-97e0-25dcebd1b0bf-kube-api-access-b6tsx\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046766 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm59z\" (UniqueName: \"kubernetes.io/projected/c1a6ef06-ee22-4874-a6be-cd8300afc95f-kube-api-access-bm59z\") pod 
\"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046818 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2s88\" (UniqueName: \"kubernetes.io/projected/a6394550-97b4-49a5-b726-602943985ef8-kube-api-access-c2s88\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.046843 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52t7x\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-kube-api-access-52t7x\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.047277 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nllch"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.047372 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-etcd-client\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048699 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048800 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f0be43-3630-48bc-988f-22965b012136-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048831 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ede1dc-ad19-4261-97f2-605dd20383c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048890 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-encryption-config\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048938 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0be43-3630-48bc-988f-22965b012136-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048951 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de99438b-f023-4f1f-97e0-25dcebd1b0bf-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048966 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.048984 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.049013 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-audit-policies\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.049441 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-88xw8"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.049926 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.051190 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a6394550-97b4-49a5-b726-602943985ef8-audit-policies\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.052757 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.053929 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-encryption-config\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054024 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c5848904-e808-42fe-a7d6-1710f0dc09a2-metrics-tls\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054073 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054119 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-serving-cert\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a6394550-97b4-49a5-b726-602943985ef8-etcd-client\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054216 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1a6ef06-ee22-4874-a6be-cd8300afc95f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054289 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.054897 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de99438b-f023-4f1f-97e0-25dcebd1b0bf-serving-cert\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.055051 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4ds2w"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.056101 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.056188 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.057499 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.058907 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.060585 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.061641 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jgjrx"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.062158 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.063089 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gx7b9"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.064484 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8qfs"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.064725 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.066338 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.067069 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.068455 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.069059 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.072094 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.076261 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.078216 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.101118 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.104313 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.110990 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.118055 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4ds2w"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.118115 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.119954 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqtcm"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.123195 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.124883 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.125529 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.128063 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.128693 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.130717 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gx7b9"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.131519 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k2mp9"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.133784 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k2mp9"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.133998 4835 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.148434 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.149889 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f0be43-3630-48bc-988f-22965b012136-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.149939 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ede1dc-ad19-4261-97f2-605dd20383c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.149997 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52t7x\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-kube-api-access-52t7x\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150028 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f0be43-3630-48bc-988f-22965b012136-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150051 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ede1dc-ad19-4261-97f2-605dd20383c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150076 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0be43-3630-48bc-988f-22965b012136-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150105 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0f506f-1e2f-4824-b95d-e97abd907131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150158 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd0f506f-1e2f-4824-b95d-e97abd907131-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.150212 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9ndp\" (UniqueName: \"kubernetes.io/projected/d9ede1dc-ad19-4261-97f2-605dd20383c1-kube-api-access-c9ndp\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.151449 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9ede1dc-ad19-4261-97f2-605dd20383c1-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.151533 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd0f506f-1e2f-4824-b95d-e97abd907131-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.153161 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd0f506f-1e2f-4824-b95d-e97abd907131-image-registry-operator-tls\") 
pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.153590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9ede1dc-ad19-4261-97f2-605dd20383c1-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.168605 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.190225 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.208517 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.229244 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.250009 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.268442 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.289909 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.308851 4835 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.328930 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.348789 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.369671 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.391098 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.395193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f0be43-3630-48bc-988f-22965b012136-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.409244 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.411395 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f0be43-3630-48bc-988f-22965b012136-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" 
Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.429878 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.449567 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.477040 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.488974 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.509100 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.530090 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.582332 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jsps\" (UniqueName: \"kubernetes.io/projected/a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb-kube-api-access-7jsps\") pod \"apiserver-76f77b778f-jxtmr\" (UID: \"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb\") " pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.608562 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.629349 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.649858 4835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.668975 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.689849 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.711751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.728548 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.749173 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.756839 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.769696 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.792817 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.808861 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.831077 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.849457 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.869348 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.888941 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.909669 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.930307 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.949540 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.958722 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxtmr"] Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.966936 4835 request.go:700] Waited for 1.002172839s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&limit=500&resourceVersion=0 Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.973083 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:30:48 crc kubenswrapper[4835]: I0129 15:30:48.989697 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.009553 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.028683 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.053908 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.069208 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.089004 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.110044 4835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.129057 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.149230 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.168718 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.188327 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.209429 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.229509 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.249063 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.269286 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.288876 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.309079 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:30:49 
crc kubenswrapper[4835]: I0129 15:30:49.309791 4835 generic.go:334] "Generic (PLEG): container finished" podID="a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb" containerID="a1f582546ce0ff517769762ade2a454c164f6fd2a23c80ecf9a8772ef2d826b2" exitCode=0 Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.309835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" event={"ID":"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb","Type":"ContainerDied","Data":"a1f582546ce0ff517769762ade2a454c164f6fd2a23c80ecf9a8772ef2d826b2"} Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.309863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" event={"ID":"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb","Type":"ContainerStarted","Data":"8da61eb6064c34a6e1ce9541bde25003add997380f91a16759ee094f9d835366"} Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.329105 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.349005 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.370546 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.390299 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.409329 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.439684 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.449952 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.469273 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.489538 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.508727 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.528580 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.549023 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.569768 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.589257 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.609804 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.650076 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tsx\" (UniqueName: 
\"kubernetes.io/projected/de99438b-f023-4f1f-97e0-25dcebd1b0bf-kube-api-access-b6tsx\") pod \"openshift-config-operator-7777fb866f-jtl2c\" (UID: \"de99438b-f023-4f1f-97e0-25dcebd1b0bf\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.668175 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2s88\" (UniqueName: \"kubernetes.io/projected/a6394550-97b4-49a5-b726-602943985ef8-kube-api-access-c2s88\") pod \"apiserver-7bbb656c7d-7qx5k\" (UID: \"a6394550-97b4-49a5-b726-602943985ef8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.673747 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.694522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm59z\" (UniqueName: \"kubernetes.io/projected/c1a6ef06-ee22-4874-a6be-cd8300afc95f-kube-api-access-bm59z\") pod \"cluster-samples-operator-665b6dd947-mdbvm\" (UID: \"c1a6ef06-ee22-4874-a6be-cd8300afc95f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.705018 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwqhw\" (UniqueName: \"kubernetes.io/projected/c5848904-e808-42fe-a7d6-1710f0dc09a2-kube-api-access-qwqhw\") pod \"dns-operator-744455d44c-8cwqh\" (UID: \"c5848904-e808-42fe-a7d6-1710f0dc09a2\") " pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.710003 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.729437 4835 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.749352 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.769139 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.789513 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.809919 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.826683 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.829126 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.849894 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.855884 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k"] Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.869664 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.892631 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 
15:30:49.911928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.929175 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.949065 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.949128 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.967221 4835 request.go:700] Waited for 1.816907869s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.975540 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" Jan 29 15:30:49 crc kubenswrapper[4835]: I0129 15:30:49.988724 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7f0be43-3630-48bc-988f-22965b012136-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bx774\" (UID: \"a7f0be43-3630-48bc-988f-22965b012136\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.005351 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52t7x\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-kube-api-access-52t7x\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.023886 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9ndp\" (UniqueName: \"kubernetes.io/projected/d9ede1dc-ad19-4261-97f2-605dd20383c1-kube-api-access-c9ndp\") pod \"openshift-controller-manager-operator-756b6f6bc6-lfc8j\" (UID: \"d9ede1dc-ad19-4261-97f2-605dd20383c1\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.048576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fd0f506f-1e2f-4824-b95d-e97abd907131-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vxp4g\" (UID: \"fd0f506f-1e2f-4824-b95d-e97abd907131\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.061643 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073646 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5pg\" (UniqueName: \"kubernetes.io/projected/121ed143-6a9d-4b89-b328-e0b820a65fa1-kube-api-access-2r5pg\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073709 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d139b8f8-a2b9-499f-a656-fc524a69d712-config\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 
15:30:50.073827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073923 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-auth-proxy-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.073996 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-stats-auth\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074029 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b565a80-c263-443c-8e9e-eab07a0692ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074074 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-config\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: 
\"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074109 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzsw\" (UniqueName: \"kubernetes.io/projected/5b565a80-c263-443c-8e9e-eab07a0692ef-kube-api-access-klzsw\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074185 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1baac430-a512-49db-b763-45522a1db0b5-machine-approver-tls\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074216 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-default-certificate\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074258 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gvg\" (UniqueName: \"kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 
15:30:50.074293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074395 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074426 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hl5\" (UniqueName: \"kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5b565a80-c263-443c-8e9e-eab07a0692ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074565 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4dc8416-2dc7-4e04-8228-3f90015ecbda-service-ca-bundle\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074611 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s772\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074702 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 
29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074851 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074885 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d139b8f8-a2b9-499f-a656-fc524a69d712-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzx4s\" (UniqueName: \"kubernetes.io/projected/d4dc8416-2dc7-4e04-8228-3f90015ecbda-kube-api-access-lzx4s\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074941 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f567caf-8d53-4977-b270-da4a3c50a910-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.074969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-images\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075008 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075036 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764gs\" (UniqueName: \"kubernetes.io/projected/1baac430-a512-49db-b763-45522a1db0b5-kube-api-access-764gs\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075064 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-trusted-ca\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075091 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075168 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-service-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075237 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075260 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0ae2d8fa-6975-4490-817e-feab685f5f01-serving-cert\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075283 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2j58\" (UniqueName: \"kubernetes.io/projected/1e97f198-d45d-4071-a2b9-f41b632caf2f-kube-api-access-s2j58\") pod \"downloads-7954f5f757-dcp7n\" (UID: \"1e97f198-d45d-4071-a2b9-f41b632caf2f\") " pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075310 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075332 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075362 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.075388 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075412 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plv6v\" (UniqueName: \"kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075436 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f567caf-8d53-4977-b270-da4a3c50a910-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075522 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075551 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075597 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d139b8f8-a2b9-499f-a656-fc524a69d712-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075736 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhkb\" (UniqueName: \"kubernetes.io/projected/2f567caf-8d53-4977-b270-da4a3c50a910-kube-api-access-4rhkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075780 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075820 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7w7\" (UniqueName: \"kubernetes.io/projected/0ae2d8fa-6975-4490-817e-feab685f5f01-kube-api-access-dj7w7\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24c4f50-5254-4170-a11d-b8cf49841066-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.075948 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076003 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076110 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-config\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.076144 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:50.576118936 +0000 UTC m=+152.769162600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076290 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076315 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-metrics-certs\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076363 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/121ed143-6a9d-4b89-b328-e0b820a65fa1-serving-cert\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-config\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.076496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxn9\" (UniqueName: \"kubernetes.io/projected/f24c4f50-5254-4170-a11d-b8cf49841066-kube-api-access-hwxn9\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.077714 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.077795 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.077869 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.077965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: W0129 15:30:50.100275 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde99438b_f023_4f1f_97e0_25dcebd1b0bf.slice/crio-3cee8e786e1d56536a873ec9e1984b17257689049bd43aa3a825208b68ad20d2 WatchSource:0}: Error finding container 3cee8e786e1d56536a873ec9e1984b17257689049bd43aa3a825208b68ad20d2: Status 404 returned error can't find the container with id 3cee8e786e1d56536a873ec9e1984b17257689049bd43aa3a825208b68ad20d2 Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.173581 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180524 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwnhm\" (UniqueName: \"kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180868 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/121ed143-6a9d-4b89-b328-e0b820a65fa1-serving-cert\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.180915 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/790fce65-a57d-4790-9772-954b8473cc10-proxy-tls\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.180966 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:50.680922218 +0000 UTC m=+152.873965872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-config\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181050 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwxn9\" (UniqueName: \"kubernetes.io/projected/f24c4f50-5254-4170-a11d-b8cf49841066-kube-api-access-hwxn9\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181096 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181117 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbs6r\" (UniqueName: 
\"kubernetes.io/projected/790fce65-a57d-4790-9772-954b8473cc10-kube-api-access-nbs6r\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181133 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f01ad59-6644-40b0-9a55-8677c520a74c-config\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181177 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181195 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r5pg\" (UniqueName: \"kubernetes.io/projected/121ed143-6a9d-4b89-b328-e0b820a65fa1-kube-api-access-2r5pg\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181212 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6jk\" (UniqueName: \"kubernetes.io/projected/0d909dc1-ec42-4a5a-afe0-def24b1342ab-kube-api-access-9k6jk\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181231 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbkh5\" (UniqueName: \"kubernetes.io/projected/19984771-6056-43f4-9d47-b90b4ad1f73a-kube-api-access-zbkh5\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181266 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181287 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181310 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-service-ca\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181336 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b565a80-c263-443c-8e9e-eab07a0692ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181361 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klzsw\" (UniqueName: \"kubernetes.io/projected/5b565a80-c263-443c-8e9e-eab07a0692ef-kube-api-access-klzsw\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181385 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2718b5b9-bc27-48a2-8b90-5755195f6e1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181409 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1baac430-a512-49db-b763-45522a1db0b5-machine-approver-tls\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gvg\" (UniqueName: \"kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181493 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hl5\" (UniqueName: \"kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181576 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19984771-6056-43f4-9d47-b90b4ad1f73a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm89c\" (UniqueName: \"kubernetes.io/projected/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-kube-api-access-sm89c\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" (UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181629 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cnf\" (UniqueName: \"kubernetes.io/projected/31485654-ce13-4b9b-a5b0-907dbd8a9e70-kube-api-access-t8cnf\") pod \"migrator-59844c95c7-6jqq5\" (UID: \"31485654-ce13-4b9b-a5b0-907dbd8a9e70\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-cert\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 
15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181676 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" (UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181693 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-serving-cert\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181709 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-certs\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181724 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-config-volume\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181748 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-ca\") 
pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181766 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqbz\" (UniqueName: \"kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181792 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181810 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764gs\" (UniqueName: \"kubernetes.io/projected/1baac430-a512-49db-b763-45522a1db0b5-kube-api-access-764gs\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.181827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-trusted-ca\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.182549 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.182578 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-serving-cert\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.182645 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.182977 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:50.682967639 +0000 UTC m=+152.876011293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184041 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b565a80-c263-443c-8e9e-eab07a0692ef-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184104 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184445 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-trusted-ca\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184686 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184749 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184943 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwb49\" (UniqueName: \"kubernetes.io/projected/7e59e961-fb45-4454-bc76-a17976c8bac7-kube-api-access-fwb49\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184993 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-plugins-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.185020 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-srv-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.184947 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.185092 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.185125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-service-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.185150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.186214 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-config\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.186426 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.187092 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121ed143-6a9d-4b89-b328-e0b820a65fa1-service-ca-bundle\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.187744 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.188006 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.188731 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121ed143-6a9d-4b89-b328-e0b820a65fa1-serving-cert\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.188921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1baac430-a512-49db-b763-45522a1db0b5-machine-approver-tls\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.190889 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.192130 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.192494 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195342 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195428 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j58\" (UniqueName: \"kubernetes.io/projected/1e97f198-d45d-4071-a2b9-f41b632caf2f-kube-api-access-s2j58\") pod \"downloads-7954f5f757-dcp7n\" (UID: \"1e97f198-d45d-4071-a2b9-f41b632caf2f\") " pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195511 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195566 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195589 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195609 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-cabundle\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f01ad59-6644-40b0-9a55-8677c520a74c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195648 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-config\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195665 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xglbm\" (UniqueName: \"kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm\") pod 
\"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195687 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f567caf-8d53-4977-b270-da4a3c50a910-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195707 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01ad59-6644-40b0-9a55-8677c520a74c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195733 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-node-bootstrap-token\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 
15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195770 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-srv-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195786 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195805 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195825 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhkb\" (UniqueName: \"kubernetes.io/projected/2f567caf-8d53-4977-b270-da4a3c50a910-kube-api-access-4rhkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195843 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24c4f50-5254-4170-a11d-b8cf49841066-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-socket-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195922 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-registration-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195958 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195976 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78k8\" (UniqueName: \"kubernetes.io/projected/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-kube-api-access-g78k8\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.195997 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjq8v\" (UniqueName: \"kubernetes.io/projected/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-kube-api-access-bjq8v\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196042 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: 
\"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196061 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196097 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-config\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-metrics-certs\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196152 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196170 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196187 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7xb\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-kube-api-access-2s7xb\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196221 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d139b8f8-a2b9-499f-a656-fc524a69d712-config\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196238 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196252 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-mountpoint-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.196269 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-auth-proxy-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196301 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-stats-auth\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196316 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-config\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196334 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-default-certificate\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196371 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-csi-data-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-config\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2718b5b9-bc27-48a2-8b90-5755195f6e1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196428 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6tfb\" (UniqueName: 
\"kubernetes.io/projected/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-kube-api-access-z6tfb\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.196443 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197090 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197441 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197528 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gw5p\" (UniqueName: \"kubernetes.io/projected/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-kube-api-access-4gw5p\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197573 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4dc8416-2dc7-4e04-8228-3f90015ecbda-service-ca-bundle\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b565a80-c263-443c-8e9e-eab07a0692ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-images\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197651 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-webhook-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: 
\"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197669 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-key\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197712 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-metrics-tls\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197729 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-client\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197754 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s772\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config\") pod 
\"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197795 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-images\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197833 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e816aea-da9c-42ed-a56e-d373828fbee5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197855 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197871 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d139b8f8-a2b9-499f-a656-fc524a69d712-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197897 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzx4s\" (UniqueName: \"kubernetes.io/projected/d4dc8416-2dc7-4e04-8228-3f90015ecbda-kube-api-access-lzx4s\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197914 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f567caf-8d53-4977-b270-da4a3c50a910-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197932 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197949 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197968 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr2r\" (UniqueName: \"kubernetes.io/projected/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-kube-api-access-plr2r\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.197986 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-proxy-tls\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198001 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198021 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae2d8fa-6975-4490-817e-feab685f5f01-serving-cert\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198040 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198056 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6xg\" (UniqueName: \"kubernetes.io/projected/96e34f38-5eb1-447d-959a-3310d8208b67-kube-api-access-ls6xg\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198081 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198098 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plv6v\" (UniqueName: \"kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198117 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z5rh\" (UniqueName: \"kubernetes.io/projected/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-kube-api-access-2z5rh\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198148 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0d909dc1-ec42-4a5a-afe0-def24b1342ab-tmpfs\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfh29\" (UniqueName: \"kubernetes.io/projected/920c998c-9d29-4358-8607-a793e7873a17-kube-api-access-sfh29\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d139b8f8-a2b9-499f-a656-fc524a69d712-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198221 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zml\" (UniqueName: 
\"kubernetes.io/projected/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-kube-api-access-c4zml\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198250 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7w7\" (UniqueName: \"kubernetes.io/projected/0ae2d8fa-6975-4490-817e-feab685f5f01-kube-api-access-dj7w7\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.198265 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llv57\" (UniqueName: \"kubernetes.io/projected/9e816aea-da9c-42ed-a56e-d373828fbee5-kube-api-access-llv57\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.199363 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4dc8416-2dc7-4e04-8228-3f90015ecbda-service-ca-bundle\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.204983 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b565a80-c263-443c-8e9e-eab07a0692ef-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.206410 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.209076 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8cwqh"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.209135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-images\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.209214 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f567caf-8d53-4977-b270-da4a3c50a910-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.209797 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.209768 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.210687 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ae2d8fa-6975-4490-817e-feab685f5f01-serving-cert\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.211211 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f24c4f50-5254-4170-a11d-b8cf49841066-config\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.212502 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1baac430-a512-49db-b763-45522a1db0b5-auth-proxy-config\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.212580 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f567caf-8d53-4977-b270-da4a3c50a910-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.213705 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.213991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f24c4f50-5254-4170-a11d-b8cf49841066-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.214110 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.214937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d139b8f8-a2b9-499f-a656-fc524a69d712-config\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.215108 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ae2d8fa-6975-4490-817e-feab685f5f01-config\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.215724 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.215942 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d139b8f8-a2b9-499f-a656-fc524a69d712-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.216944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-default-certificate\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.217762 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-metrics-certs\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.217988 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4dc8416-2dc7-4e04-8228-3f90015ecbda-stats-auth\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.218178 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.219052 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.219384 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.220060 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.220186 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert\") pod \"console-f9d7485db-zskhr\" (UID: 
\"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.220691 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.220731 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.221371 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.222514 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.222573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: W0129 15:30:50.222623 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5848904_e808_42fe_a7d6_1710f0dc09a2.slice/crio-02feb042bee18fd804b1dd2918cd5c3293b592d2de488366599485bc35e9f6fe WatchSource:0}: Error finding container 02feb042bee18fd804b1dd2918cd5c3293b592d2de488366599485bc35e9f6fe: Status 404 returned error can't find the container with id 02feb042bee18fd804b1dd2918cd5c3293b592d2de488366599485bc35e9f6fe Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.223024 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.231284 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwxn9\" (UniqueName: \"kubernetes.io/projected/f24c4f50-5254-4170-a11d-b8cf49841066-kube-api-access-hwxn9\") pod \"machine-api-operator-5694c8668f-g8qkk\" (UID: \"f24c4f50-5254-4170-a11d-b8cf49841066\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.243624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hl5\" (UniqueName: \"kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5\") pod \"console-f9d7485db-zskhr\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " 
pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.246222 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.264056 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764gs\" (UniqueName: \"kubernetes.io/projected/1baac430-a512-49db-b763-45522a1db0b5-kube-api-access-764gs\") pod \"machine-approver-56656f9798-c9z4r\" (UID: \"1baac430-a512-49db-b763-45522a1db0b5\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.286995 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.303871 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304176 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-mountpoint-dir\") pod 
\"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s7xb\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-kube-api-access-2s7xb\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304258 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-csi-data-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304279 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-config\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304299 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2718b5b9-bc27-48a2-8b90-5755195f6e1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304320 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304345 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6tfb\" (UniqueName: \"kubernetes.io/projected/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-kube-api-access-z6tfb\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304370 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-images\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304393 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 
15:30:50.304418 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gw5p\" (UniqueName: \"kubernetes.io/projected/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-kube-api-access-4gw5p\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304441 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-webhook-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304489 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-key\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304516 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-metrics-tls\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304538 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-client\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304566 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e816aea-da9c-42ed-a56e-d373828fbee5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304644 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304696 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plr2r\" (UniqueName: 
\"kubernetes.io/projected/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-kube-api-access-plr2r\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304717 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-proxy-tls\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304764 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6xg\" (UniqueName: \"kubernetes.io/projected/96e34f38-5eb1-447d-959a-3310d8208b67-kube-api-access-ls6xg\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304795 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z5rh\" (UniqueName: \"kubernetes.io/projected/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-kube-api-access-2z5rh\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304817 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0d909dc1-ec42-4a5a-afe0-def24b1342ab-tmpfs\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304840 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfh29\" (UniqueName: \"kubernetes.io/projected/920c998c-9d29-4358-8607-a793e7873a17-kube-api-access-sfh29\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zml\" (UniqueName: \"kubernetes.io/projected/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-kube-api-access-c4zml\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llv57\" (UniqueName: \"kubernetes.io/projected/9e816aea-da9c-42ed-a56e-d373828fbee5-kube-api-access-llv57\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwnhm\" (UniqueName: \"kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304955 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.304977 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305023 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/790fce65-a57d-4790-9772-954b8473cc10-proxy-tls\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbs6r\" (UniqueName: \"kubernetes.io/projected/790fce65-a57d-4790-9772-954b8473cc10-kube-api-access-nbs6r\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305074 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f01ad59-6644-40b0-9a55-8677c520a74c-config\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305107 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6jk\" (UniqueName: \"kubernetes.io/projected/0d909dc1-ec42-4a5a-afe0-def24b1342ab-kube-api-access-9k6jk\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbkh5\" (UniqueName: \"kubernetes.io/projected/19984771-6056-43f4-9d47-b90b4ad1f73a-kube-api-access-zbkh5\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305162 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-service-ca\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.305210 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2718b5b9-bc27-48a2-8b90-5755195f6e1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305243 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19984771-6056-43f4-9d47-b90b4ad1f73a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305268 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm89c\" (UniqueName: \"kubernetes.io/projected/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-kube-api-access-sm89c\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" (UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305304 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-cert\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305329 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" 
(UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305353 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-serving-cert\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cnf\" (UniqueName: \"kubernetes.io/projected/31485654-ce13-4b9b-a5b0-907dbd8a9e70-kube-api-access-t8cnf\") pod \"migrator-59844c95c7-6jqq5\" (UID: \"31485654-ce13-4b9b-a5b0-907dbd8a9e70\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-certs\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305424 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-config-volume\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-serving-cert\") pod 
\"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305490 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-ca\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305514 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cqbz\" (UniqueName: \"kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305548 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-srv-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305573 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwb49\" (UniqueName: \"kubernetes.io/projected/7e59e961-fb45-4454-bc76-a17976c8bac7-kube-api-access-fwb49\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-plugins-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305653 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305677 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-cabundle\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305698 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f01ad59-6644-40b0-9a55-8677c520a74c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: 
I0129 15:30:50.305720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-config\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xglbm\" (UniqueName: \"kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01ad59-6644-40b0-9a55-8677c520a74c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-srv-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-node-bootstrap-token\") pod \"machine-config-server-jgjrx\" (UID: 
\"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305867 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-socket-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305889 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-registration-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305914 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78k8\" (UniqueName: \"kubernetes.io/projected/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-kube-api-access-g78k8\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.305937 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjq8v\" (UniqueName: 
\"kubernetes.io/projected/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-kube-api-access-bjq8v\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.306585 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2718b5b9-bc27-48a2-8b90-5755195f6e1e-trusted-ca\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.306739 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:50.806707472 +0000 UTC m=+152.999751296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.306905 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.307692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-images\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.308408 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.314914 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-csi-data-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: 
\"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.314965 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-mountpoint-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.315225 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-webhook-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.315548 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d909dc1-ec42-4a5a-afe0-def24b1342ab-apiservice-cert\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.315895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-config\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.317142 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-serving-cert\") pod \"etcd-operator-b45778765-c8qfs\" (UID: 
\"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.317321 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-ca\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.320261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klzsw\" (UniqueName: \"kubernetes.io/projected/5b565a80-c263-443c-8e9e-eab07a0692ef-kube-api-access-klzsw\") pod \"openshift-apiserver-operator-796bbdcf4f-zvqf2\" (UID: \"5b565a80-c263-443c-8e9e-eab07a0692ef\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.330483 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.330867 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.331905 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f01ad59-6644-40b0-9a55-8677c520a74c-config\") pod 
\"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.337176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-socket-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.339835 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.340337 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-service-ca\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.343213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0d909dc1-ec42-4a5a-afe0-def24b1342ab-tmpfs\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.343902 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-plugins-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.350311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.357880 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/790fce65-a57d-4790-9772-954b8473cc10-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.360369 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" event={"ID":"c5848904-e808-42fe-a7d6-1710f0dc09a2","Type":"ContainerStarted","Data":"02feb042bee18fd804b1dd2918cd5c3293b592d2de488366599485bc35e9f6fe"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.361439 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-cabundle\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.361692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96e34f38-5eb1-447d-959a-3310d8208b67-registration-dir\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.361971 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-config\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.362203 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e59e961-fb45-4454-bc76-a17976c8bac7-signing-key\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.362482 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-config-volume\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.362562 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.363451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 
crc kubenswrapper[4835]: I0129 15:30:50.366395 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-certs\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.364371 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f01ad59-6644-40b0-9a55-8677c520a74c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.364847 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-cert\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.364998 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/790fce65-a57d-4790-9772-954b8473cc10-proxy-tls\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.365340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-etcd-client\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc 
kubenswrapper[4835]: I0129 15:30:50.365364 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2718b5b9-bc27-48a2-8b90-5755195f6e1e-metrics-tls\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.365434 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/19984771-6056-43f4-9d47-b90b4ad1f73a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.365659 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-metrics-tls\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.365745 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.365918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e816aea-da9c-42ed-a56e-d373828fbee5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.364276 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-proxy-tls\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.367039 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-srv-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.367557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-serving-cert\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.367879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r5pg\" (UniqueName: \"kubernetes.io/projected/121ed143-6a9d-4b89-b328-e0b820a65fa1-kube-api-access-2r5pg\") pod \"authentication-operator-69f744f599-nllch\" (UID: \"121ed143-6a9d-4b89-b328-e0b820a65fa1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.368413 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-profile-collector-cert\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.368903 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" event={"ID":"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb","Type":"ContainerStarted","Data":"764ae1f57290da1cfe1d40e521ce5aafe8f293651c2e54e27994b57c9d16ae06"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.369036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" event={"ID":"a4c0f0a9-759e-447c-bd48-dc4ac0dab3fb","Type":"ContainerStarted","Data":"0a42c718c9bd3d043d7953ba02446366df7066ee4b42e3bda92183308e179425"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.369255 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" (UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.369643 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j58\" (UniqueName: \"kubernetes.io/projected/1e97f198-d45d-4071-a2b9-f41b632caf2f-kube-api-access-s2j58\") pod \"downloads-7954f5f757-dcp7n\" (UID: \"1e97f198-d45d-4071-a2b9-f41b632caf2f\") " pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.372193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/920c998c-9d29-4358-8607-a793e7873a17-node-bootstrap-token\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.375317 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-srv-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.375335 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gvg\" (UniqueName: \"kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg\") pod \"route-controller-manager-6576b87f9c-g8gtz\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.375865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.376692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.381755 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" event={"ID":"de99438b-f023-4f1f-97e0-25dcebd1b0bf","Type":"ContainerStarted","Data":"53d92a6713bd078dd7a5f5e8100b66fe600b7854e97de97bd606c618301accaf"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.381812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" event={"ID":"de99438b-f023-4f1f-97e0-25dcebd1b0bf","Type":"ContainerStarted","Data":"3cee8e786e1d56536a873ec9e1984b17257689049bd43aa3a825208b68ad20d2"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.382434 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.388860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s772\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.399457 4835 generic.go:334] "Generic (PLEG): container finished" podID="a6394550-97b4-49a5-b726-602943985ef8" containerID="8dfdc410526866bb80d5f8c361131c5dcd0aa4cd746d4a2ce3d99632e2a245cd" exitCode=0 Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.399521 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" event={"ID":"a6394550-97b4-49a5-b726-602943985ef8","Type":"ContainerDied","Data":"8dfdc410526866bb80d5f8c361131c5dcd0aa4cd746d4a2ce3d99632e2a245cd"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.399547 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" event={"ID":"a6394550-97b4-49a5-b726-602943985ef8","Type":"ContainerStarted","Data":"9974fc4ff10edea83ff6564b3b04fa10b2ba38fd46edb48ffbc74487fd2eaf90"} Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.410691 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.415186 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.417619 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:50.917596195 +0000 UTC m=+153.110639849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.421228 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.425762 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d139b8f8-a2b9-499f-a656-fc524a69d712-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d6nj4\" (UID: \"d139b8f8-a2b9-499f-a656-fc524a69d712\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.431689 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plv6v\" (UniqueName: \"kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v\") pod \"oauth-openshift-558db77b4-wrghl\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.448997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7w7\" (UniqueName: \"kubernetes.io/projected/0ae2d8fa-6975-4490-817e-feab685f5f01-kube-api-access-dj7w7\") pod \"console-operator-58897d9998-88xw8\" (UID: \"0ae2d8fa-6975-4490-817e-feab685f5f01\") " pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.467740 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.484092 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.485868 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzx4s\" (UniqueName: \"kubernetes.io/projected/d4dc8416-2dc7-4e04-8228-3f90015ecbda-kube-api-access-lzx4s\") pod \"router-default-5444994796-2v9n5\" (UID: \"d4dc8416-2dc7-4e04-8228-3f90015ecbda\") " pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.486499 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.489648 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.504394 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhkb\" (UniqueName: \"kubernetes.io/projected/2f567caf-8d53-4977-b270-da4a3c50a910-kube-api-access-4rhkb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rfx2l\" (UID: \"2f567caf-8d53-4977-b270-da4a3c50a910\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.512248 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.516890 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.517196 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.017165465 +0000 UTC m=+153.210209129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.524860 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llv57\" (UniqueName: \"kubernetes.io/projected/9e816aea-da9c-42ed-a56e-d373828fbee5-kube-api-access-llv57\") pod \"package-server-manager-789f6589d5-zsc2h\" (UID: \"9e816aea-da9c-42ed-a56e-d373828fbee5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.525287 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.526090 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.528349 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.028328643 +0000 UTC m=+153.221372297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.531435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.543759 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.560116 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjq8v\" (UniqueName: \"kubernetes.io/projected/1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa-kube-api-access-bjq8v\") pod \"olm-operator-6b444d44fb-xdsww\" (UID: \"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.564589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwnhm\" (UniqueName: \"kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm\") pod \"collect-profiles-29495010-pgn6g\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.569171 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.573204 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.583936 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6tfb\" (UniqueName: \"kubernetes.io/projected/af4d1d12-1cc3-4e15-896e-7ddd1d4bf108-kube-api-access-z6tfb\") pod \"machine-config-controller-84d6567774-g94r2\" (UID: \"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.600189 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.604586 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gw5p\" (UniqueName: \"kubernetes.io/projected/0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4-kube-api-access-4gw5p\") pod \"etcd-operator-b45778765-c8qfs\" (UID: \"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.610647 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.630887 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.632321 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.132304404 +0000 UTC m=+153.325348058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.637272 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.646405 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.646175 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s7xb\" (UniqueName: \"kubernetes.io/projected/2718b5b9-bc27-48a2-8b90-5755195f6e1e-kube-api-access-2s7xb\") pod \"ingress-operator-5b745b69d9-26lrq\" (UID: \"2718b5b9-bc27-48a2-8b90-5755195f6e1e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.659111 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.670818 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f01ad59-6644-40b0-9a55-8677c520a74c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-grg4n\" (UID: \"6f01ad59-6644-40b0-9a55-8677c520a74c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.675379 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.677779 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.691073 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr2r\" (UniqueName: \"kubernetes.io/projected/3fd11507-99ee-4baf-8f4c-35ee40b8ed05-kube-api-access-plr2r\") pod \"ingress-canary-4ds2w\" (UID: \"3fd11507-99ee-4baf-8f4c-35ee40b8ed05\") " pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.701054 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.705212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xglbm\" (UniqueName: \"kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm\") pod \"controller-manager-879f6c89f-t66v2\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.731259 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4ds2w" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.732255 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.732584 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.232572152 +0000 UTC m=+153.425615796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.735632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbs6r\" (UniqueName: \"kubernetes.io/projected/790fce65-a57d-4790-9772-954b8473cc10-kube-api-access-nbs6r\") pod \"machine-config-operator-74547568cd-pz8pv\" (UID: \"790fce65-a57d-4790-9772-954b8473cc10\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.736566 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.743742 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.750774 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm89c\" (UniqueName: \"kubernetes.io/projected/724b0b6d-fe63-47f2-85dd-90d5accd7cf3-kube-api-access-sm89c\") pod \"control-plane-machine-set-operator-78cbb6b69f-8vm4c\" (UID: \"724b0b6d-fe63-47f2-85dd-90d5accd7cf3\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.770419 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6jk\" (UniqueName: \"kubernetes.io/projected/0d909dc1-ec42-4a5a-afe0-def24b1342ab-kube-api-access-9k6jk\") pod \"packageserver-d55dfcdfc-sp4ll\" (UID: \"0d909dc1-ec42-4a5a-afe0-def24b1342ab\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:50 crc kubenswrapper[4835]: W0129 15:30:50.773875 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd0f506f_1e2f_4824_b95d_e97abd907131.slice/crio-b6aea0798bdff7152fe6f133eac300a75244c6a23ce4c0f5dbb8e5ba7cea6721 WatchSource:0}: Error finding container b6aea0798bdff7152fe6f133eac300a75244c6a23ce4c0f5dbb8e5ba7cea6721: Status 404 returned error can't find the container with id b6aea0798bdff7152fe6f133eac300a75244c6a23ce4c0f5dbb8e5ba7cea6721 Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.796052 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfh29\" (UniqueName: \"kubernetes.io/projected/920c998c-9d29-4358-8607-a793e7873a17-kube-api-access-sfh29\") pod \"machine-config-server-jgjrx\" (UID: \"920c998c-9d29-4358-8607-a793e7873a17\") " pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.820413 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.823595 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cqbz\" (UniqueName: \"kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz\") pod \"marketplace-operator-79b997595-cfwcj\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.832958 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.833137 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zml\" (UniqueName: \"kubernetes.io/projected/79c1d489-95dd-4f18-a7b3-92fcbaec6e03-kube-api-access-c4zml\") pod \"dns-default-gx7b9\" (UID: \"79c1d489-95dd-4f18-a7b3-92fcbaec6e03\") " pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.833160 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.333129098 +0000 UTC m=+153.526172752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.833379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.834891 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.334865811 +0000 UTC m=+153.527909465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.849224 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbkh5\" (UniqueName: \"kubernetes.io/projected/19984771-6056-43f4-9d47-b90b4ad1f73a-kube-api-access-zbkh5\") pod \"multus-admission-controller-857f4d67dd-z6rg2\" (UID: \"19984771-6056-43f4-9d47-b90b4ad1f73a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.876765 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z5rh\" (UniqueName: \"kubernetes.io/projected/2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd-kube-api-access-2z5rh\") pod \"catalog-operator-68c6474976-8w8z6\" (UID: \"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.879680 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.885614 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8cnf\" (UniqueName: \"kubernetes.io/projected/31485654-ce13-4b9b-a5b0-907dbd8a9e70-kube-api-access-t8cnf\") pod \"migrator-59844c95c7-6jqq5\" (UID: \"31485654-ce13-4b9b-a5b0-907dbd8a9e70\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.888346 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2"] Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.897766 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.911843 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78k8\" (UniqueName: \"kubernetes.io/projected/ecf17647-65a6-4798-bac2-7f8dffd3ad8f-kube-api-access-g78k8\") pod \"service-ca-operator-777779d784-cpfv6\" (UID: \"ecf17647-65a6-4798-bac2-7f8dffd3ad8f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.920397 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.928404 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.935066 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:50 crc kubenswrapper[4835]: E0129 15:30:50.935526 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.435508419 +0000 UTC m=+153.628552073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.935926 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.942109 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.942140 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls6xg\" (UniqueName: \"kubernetes.io/projected/96e34f38-5eb1-447d-959a-3310d8208b67-kube-api-access-ls6xg\") pod \"csi-hostpathplugin-k2mp9\" (UID: \"96e34f38-5eb1-447d-959a-3310d8208b67\") " pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.952103 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.954736 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwb49\" (UniqueName: \"kubernetes.io/projected/7e59e961-fb45-4454-bc76-a17976c8bac7-kube-api-access-fwb49\") pod \"service-ca-9c57cc56f-zqtcm\" (UID: \"7e59e961-fb45-4454-bc76-a17976c8bac7\") " pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.967891 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" Jan 29 15:30:50 crc kubenswrapper[4835]: I0129 15:30:50.991262 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.007728 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.021772 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.024889 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g8qkk"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.037287 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.043991 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.543969061 +0000 UTC m=+153.737012725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.044911 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jgjrx" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.055762 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.068097 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.079798 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.138769 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.139266 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.639243145 +0000 UTC m=+153.832286799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.240992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.241900 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.741873872 +0000 UTC m=+153.934917526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.291777 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.342866 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.343128 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.843112545 +0000 UTC m=+154.036156209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.345352 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nllch"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.345395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wrghl"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.349994 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.356324 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-c8qfs"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.422010 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" event={"ID":"5b565a80-c263-443c-8e9e-eab07a0692ef","Type":"ContainerStarted","Data":"26e7a08cfa4f208a7994f44b42c0e7b3d5acbb6484db31e729033e40c36d0613"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.430868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" event={"ID":"c5848904-e808-42fe-a7d6-1710f0dc09a2","Type":"ContainerStarted","Data":"465c0e278c7bc06f9e72fe385a3987fc6de5f1bf5e1fcab930d836107067ef82"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.431843 4835 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" event={"ID":"f24c4f50-5254-4170-a11d-b8cf49841066","Type":"ContainerStarted","Data":"39412dfd3c5b1a920d88481355d6f88f8896249c4966d7305fbc4a4b4ad5128a"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.433438 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" event={"ID":"c1a6ef06-ee22-4874-a6be-cd8300afc95f","Type":"ContainerStarted","Data":"fe7e2786e9bdb1030df61c125cb8b17c4eafd982c577ac7ebf7303a4b81962a0"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.433460 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" event={"ID":"c1a6ef06-ee22-4874-a6be-cd8300afc95f","Type":"ContainerStarted","Data":"fb0655ee27ff67dd2322aadb75f33e02e2443a7facdfe6c0af6c523045f77893"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.433489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" event={"ID":"c1a6ef06-ee22-4874-a6be-cd8300afc95f","Type":"ContainerStarted","Data":"7cf370875c370ac6db1657e8c1e3ae77d55e2d56c76e79b381f27185f8ce5d58"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.445898 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.446015 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2v9n5" 
event={"ID":"d4dc8416-2dc7-4e04-8228-3f90015ecbda","Type":"ContainerStarted","Data":"3cb8e6a69e514c768049e72df99c75e57b3ecc73c0b1c5be54ac234241114fa9"} Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.446269 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:51.946255904 +0000 UTC m=+154.139299558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.450496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" event={"ID":"2f567caf-8d53-4977-b270-da4a3c50a910","Type":"ContainerStarted","Data":"7fea98f7d0e71082b0c457a3b2d21aa339b1156b32b20f173c7fe9bfe0ef60ac"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.454009 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" event={"ID":"a6394550-97b4-49a5-b726-602943985ef8","Type":"ContainerStarted","Data":"057159c6cc8f54883657f40027a0edee492ae4d9048c2e666d66f75d8858ba5e"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.455374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" 
event={"ID":"a7f0be43-3630-48bc-988f-22965b012136","Type":"ContainerStarted","Data":"36357a3d9396872bd0337a943edcd2d34a7b3359e479e25e55fec211bd7d8449"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.458855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" event={"ID":"fd0f506f-1e2f-4824-b95d-e97abd907131","Type":"ContainerStarted","Data":"b6aea0798bdff7152fe6f133eac300a75244c6a23ce4c0f5dbb8e5ba7cea6721"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.460579 4835 generic.go:334] "Generic (PLEG): container finished" podID="de99438b-f023-4f1f-97e0-25dcebd1b0bf" containerID="53d92a6713bd078dd7a5f5e8100b66fe600b7854e97de97bd606c618301accaf" exitCode=0 Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.460672 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" event={"ID":"de99438b-f023-4f1f-97e0-25dcebd1b0bf","Type":"ContainerDied","Data":"53d92a6713bd078dd7a5f5e8100b66fe600b7854e97de97bd606c618301accaf"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.465257 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" event={"ID":"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1","Type":"ContainerStarted","Data":"fd23c4af991d11e4bab37acc8dc681f64d1d1047fc5b970f74a66ed3d9a3c3d1"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.468666 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zskhr" event={"ID":"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4","Type":"ContainerStarted","Data":"aaf4065c34313977154d88212786021bb8dda8f579c41fca8ebc82162c15a488"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.471634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" 
event={"ID":"1baac430-a512-49db-b763-45522a1db0b5","Type":"ContainerStarted","Data":"ef3db07bb7433016abf3aa61a0d558a52ba26e6fb3d0ecee9ce4970c6c1f2681"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.471695 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" event={"ID":"1baac430-a512-49db-b763-45522a1db0b5","Type":"ContainerStarted","Data":"e835cbc117644406c43f5b482e1d41618ea2a3abc4899420d2f8321aa8c92cfb"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.475442 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" event={"ID":"d9ede1dc-ad19-4261-97f2-605dd20383c1","Type":"ContainerStarted","Data":"0f515a01c04fa951e6db23f4b62e7caec99210c2b675fab6316821421c7f21d8"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.475511 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" event={"ID":"d9ede1dc-ad19-4261-97f2-605dd20383c1","Type":"ContainerStarted","Data":"c539caf2a5ae8dcc63c399d530fd50a16717c4226edfabdc2955ee31ccef8ca9"} Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.556823 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.557110 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:52.057051465 +0000 UTC m=+154.250095119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.559208 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.579277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.079250018 +0000 UTC m=+154.272293672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.618771 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.618853 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.672912 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.673584 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.173561298 +0000 UTC m=+154.366604952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.775044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.775751 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.275736804 +0000 UTC m=+154.468780458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.843339 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dcp7n"] Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.878384 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.878534 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.378513144 +0000 UTC m=+154.571556798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.878774 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.879513 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.379500489 +0000 UTC m=+154.572544483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.984772 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lfc8j" podStartSLOduration=133.98474976 podStartE2EDuration="2m13.98474976s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:51.979387507 +0000 UTC m=+154.172431181" watchObservedRunningTime="2026-01-29 15:30:51.98474976 +0000 UTC m=+154.177793424" Jan 29 15:30:51 crc kubenswrapper[4835]: I0129 15:30:51.992254 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:51 crc kubenswrapper[4835]: E0129 15:30:51.994700 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.494670508 +0000 UTC m=+154.687714162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.093675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.095351 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.595330676 +0000 UTC m=+154.788374360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.120604 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mdbvm" podStartSLOduration=134.120580015 podStartE2EDuration="2m14.120580015s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:52.118757039 +0000 UTC m=+154.311800703" watchObservedRunningTime="2026-01-29 15:30:52.120580015 +0000 UTC m=+154.313623669" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.195080 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.196000 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.695975163 +0000 UTC m=+154.889018827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.196435 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.196743 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.696726632 +0000 UTC m=+154.889770286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.288913 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" podStartSLOduration=134.288894748 podStartE2EDuration="2m14.288894748s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:52.245681552 +0000 UTC m=+154.438725206" watchObservedRunningTime="2026-01-29 15:30:52.288894748 +0000 UTC m=+154.481938412" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.298684 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.298807 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.798789225 +0000 UTC m=+154.991832879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.299053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.299412 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.79940033 +0000 UTC m=+154.992443994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.400107 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.400980 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:52.90094701 +0000 UTC m=+155.093990664 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.489778 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" podStartSLOduration=134.489761783 podStartE2EDuration="2m14.489761783s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:52.487493587 +0000 UTC m=+154.680537241" watchObservedRunningTime="2026-01-29 15:30:52.489761783 +0000 UTC m=+154.682805437" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.502174 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.502689 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.002673135 +0000 UTC m=+155.195716789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.589107 4835 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-g8gtz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.589170 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.603084 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.608150 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:53.108126442 +0000 UTC m=+155.301170096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.625319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" event={"ID":"121ed143-6a9d-4b89-b328-e0b820a65fa1","Type":"ContainerStarted","Data":"f32e60341313d5e64b4d0fa9edeed262af8f3702333584613b116901912cf2fd"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632241 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" event={"ID":"c5848904-e808-42fe-a7d6-1710f0dc09a2","Type":"ContainerStarted","Data":"37a674e543459af67339e1050484cc5aee98e260fa22f744ce7c111fe8c09f53"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632373 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4ds2w"] Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632448 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632536 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" event={"ID":"a7f0be43-3630-48bc-988f-22965b012136","Type":"ContainerStarted","Data":"112ca84c6096f444a1f4e22522a6fe12c45f0042e59243892094c0571d389c1a"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" event={"ID":"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1","Type":"ContainerStarted","Data":"79af2b0545ae9ce1b9e99444477ce9206e9d905ab6af27612422c1f7860059f8"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632753 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h"] Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632811 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" event={"ID":"5b565a80-c263-443c-8e9e-eab07a0692ef","Type":"ContainerStarted","Data":"025fa1c0de639399c7393604dacb2134f9b2e43a6daa35257cb68ab7ddf535f3"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632888 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jgjrx" event={"ID":"920c998c-9d29-4358-8607-a793e7873a17","Type":"ContainerStarted","Data":"75071c1db96619f6a86ebda5e4851d6e17469697958860eee0042804d7ec638c"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.632961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dcp7n" event={"ID":"1e97f198-d45d-4071-a2b9-f41b632caf2f","Type":"ContainerStarted","Data":"2b6f6a488e4ea0d88b6c24fd241dd4b78e1b8193ad7266b439315c0aa94dd5aa"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" event={"ID":"d139b8f8-a2b9-499f-a656-fc524a69d712","Type":"ContainerStarted","Data":"d560a4eff5ce122597d2e11a43d905e00a873e79dde18030058a52de0d2558b7"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633080 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww"] Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" event={"ID":"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4","Type":"ContainerStarted","Data":"7439689ca4ebc4f282629182ed3a04a1352020e0aaeadbb1f057424764702796"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" event={"ID":"b51dbfee-ce3e-4c24-946c-f7481c5c286a","Type":"ContainerStarted","Data":"d647ce725810a3ac0f9196e551d06aa882b3f200179caffc213462edb5043c74"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" event={"ID":"fd0f506f-1e2f-4824-b95d-e97abd907131","Type":"ContainerStarted","Data":"1c4d37fbaa0b041b309bb1bafa9805d9db8a6ff510a1d41cf99a11fb1cb2c2eb"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.633673 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2v9n5" event={"ID":"d4dc8416-2dc7-4e04-8228-3f90015ecbda","Type":"ContainerStarted","Data":"ad8c00ac12e50d7fa1f2fa97f4691977b1e07c88c96bc71698eca3b4dec5bb1f"} Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.666604 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8cwqh" podStartSLOduration=134.666582319 
podStartE2EDuration="2m14.666582319s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:52.652136219 +0000 UTC m=+154.845179883" watchObservedRunningTime="2026-01-29 15:30:52.666582319 +0000 UTC m=+154.859626073" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.720330 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2"] Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.724458 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.734472 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.23444564 +0000 UTC m=+155.427489294 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.765813 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-88xw8"] Jan 29 15:30:52 crc kubenswrapper[4835]: W0129 15:30:52.797413 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ae2d8fa_6975_4490_817e_feab685f5f01.slice/crio-e7af1143938a35446b9c3325b0a2b30a468cb53f085358d68812eade9936045c WatchSource:0}: Error finding container e7af1143938a35446b9c3325b0a2b30a468cb53f085358d68812eade9936045c: Status 404 returned error can't find the container with id e7af1143938a35446b9c3325b0a2b30a468cb53f085358d68812eade9936045c Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.827719 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.828000 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.32795791 +0000 UTC m=+155.521001564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.830975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.833161 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.333136819 +0000 UTC m=+155.526180483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.851623 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zvqf2" podStartSLOduration=134.851594989 podStartE2EDuration="2m14.851594989s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:52.844110692 +0000 UTC m=+155.037154356" watchObservedRunningTime="2026-01-29 15:30:52.851594989 +0000 UTC m=+155.044638643" Jan 29 15:30:52 crc kubenswrapper[4835]: I0129 15:30:52.936734 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:52 crc kubenswrapper[4835]: E0129 15:30:52.937577 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.4375365 +0000 UTC m=+155.630580164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.014418 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.014507 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z6rg2"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.034607 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bx774" podStartSLOduration=135.034585958 podStartE2EDuration="2m15.034585958s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.03104447 +0000 UTC m=+155.224088124" watchObservedRunningTime="2026-01-29 15:30:53.034585958 +0000 UTC m=+155.227629612" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.038343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.038765 4835 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.538750682 +0000 UTC m=+155.731794346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.073257 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" podStartSLOduration=134.07320319 podStartE2EDuration="2m14.07320319s" podCreationTimestamp="2026-01-29 15:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.064890113 +0000 UTC m=+155.257933777" watchObservedRunningTime="2026-01-29 15:30:53.07320319 +0000 UTC m=+155.266246844" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.128066 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jgjrx" podStartSLOduration=6.128046826 podStartE2EDuration="6.128046826s" podCreationTimestamp="2026-01-29 15:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.087978388 +0000 UTC m=+155.281022042" watchObservedRunningTime="2026-01-29 15:30:53.128046826 +0000 UTC m=+155.321090490" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.129587 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vxp4g" podStartSLOduration=135.129579055 podStartE2EDuration="2m15.129579055s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.127931994 +0000 UTC m=+155.320975648" watchObservedRunningTime="2026-01-29 15:30:53.129579055 +0000 UTC m=+155.322622709" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.143411 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.155533 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.655503051 +0000 UTC m=+155.848546705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.155774 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.156110 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.656101866 +0000 UTC m=+155.849145520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.177520 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2v9n5" podStartSLOduration=135.177496389 podStartE2EDuration="2m15.177496389s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.172809512 +0000 UTC m=+155.365853166" watchObservedRunningTime="2026-01-29 15:30:53.177496389 +0000 UTC m=+155.370540043" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.179995 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.213786 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k2mp9"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.221781 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.259082 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.259596 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.759577514 +0000 UTC m=+155.952621178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.264167 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gx7b9"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.272030 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.278740 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-zqtcm"] Jan 29 15:30:53 crc kubenswrapper[4835]: W0129 15:30:53.295084 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod790fce65_a57d_4790_9772_954b8473cc10.slice/crio-c3db7834960c0020c5e0ad960b98ecc56fb254234b0379296c553f3b25a8a559 WatchSource:0}: Error finding container c3db7834960c0020c5e0ad960b98ecc56fb254234b0379296c553f3b25a8a559: Status 404 returned error can't find the container with id c3db7834960c0020c5e0ad960b98ecc56fb254234b0379296c553f3b25a8a559 Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.295649 4835 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c"] Jan 29 15:30:53 crc kubenswrapper[4835]: W0129 15:30:53.301130 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79c1d489_95dd_4f18_a7b3_92fcbaec6e03.slice/crio-20f675b0a35d33a57ba740ddb0bb62a01b61bff462fbaf219a63e9a0215ae67e WatchSource:0}: Error finding container 20f675b0a35d33a57ba740ddb0bb62a01b61bff462fbaf219a63e9a0215ae67e: Status 404 returned error can't find the container with id 20f675b0a35d33a57ba740ddb0bb62a01b61bff462fbaf219a63e9a0215ae67e Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.312493 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.324694 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.326867 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.330409 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.341375 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6"] Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.364854 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: 
\"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.365320 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.865302298 +0000 UTC m=+156.058346022 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.466834 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.467168 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:53.967138445 +0000 UTC m=+156.160182099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.569449 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.570212 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.070198423 +0000 UTC m=+156.263242077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.571989 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.588691 4835 patch_prober.go:28] interesting pod/router-default-5444994796-2v9n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:53 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Jan 29 15:30:53 crc kubenswrapper[4835]: [+]process-running ok Jan 29 15:30:53 crc kubenswrapper[4835]: healthz check failed Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.588741 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2v9n5" podUID="d4dc8416-2dc7-4e04-8228-3f90015ecbda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.632678 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" event={"ID":"724b0b6d-fe63-47f2-85dd-90d5accd7cf3","Type":"ContainerStarted","Data":"81da37dd74e7406e8866f4a5162b6654fa3f1ce03e83f342aa3693ec4396968f"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.640237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" event={"ID":"f24c4f50-5254-4170-a11d-b8cf49841066","Type":"ContainerStarted","Data":"64b8b0b4b660b2b8de2fb11a8538d59d01e8e27a42a9fe8e30289bce6c753356"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.640272 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" event={"ID":"f24c4f50-5254-4170-a11d-b8cf49841066","Type":"ContainerStarted","Data":"4d3044bff80748a57f8a282ae1ce4ed9e490293e249a991ae42eb5439062f65f"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.641877 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" event={"ID":"121ed143-6a9d-4b89-b328-e0b820a65fa1","Type":"ContainerStarted","Data":"125373b8414b1a2571bcd89927c71fa3b0696c8903f7397531c488a05f4fa5a0"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.643770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" event={"ID":"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd","Type":"ContainerStarted","Data":"2dc520d6a21792e9d9feb0a118f396da49ea1d2116fe0cb73d8483690579013b"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.645013 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" event={"ID":"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa","Type":"ContainerStarted","Data":"8b70e45ea6ab7ec7f9e54a095848d6efe0e7e5f1b98d86d60c78657bebe8ca19"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.645145 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" event={"ID":"1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa","Type":"ContainerStarted","Data":"a296f4ad82fbe70d288eae6b98d11065abac532fb8ad1fcecb17339996e814d7"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 
15:30:53.645991 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.650841 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xdsww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.650920 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" podUID="1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.652011 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" event={"ID":"96e34f38-5eb1-447d-959a-3310d8208b67","Type":"ContainerStarted","Data":"6b6f5e2f10fe0468631be489a63e41837caa8bed1e735b525e15c5c25f36fc0e"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.653312 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" event={"ID":"0d909dc1-ec42-4a5a-afe0-def24b1342ab","Type":"ContainerStarted","Data":"44395fea60b5861b19cc513368ad9cddcc9a8efd5619f1adc488d4fff0989e3d"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.655833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" event={"ID":"d139b8f8-a2b9-499f-a656-fc524a69d712","Type":"ContainerStarted","Data":"d1bf0d5000af7ba85db74d92dd94ea99dec1d2d4a6b56b48ae940f53e4fe1eab"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.669954 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" event={"ID":"9e816aea-da9c-42ed-a56e-d373828fbee5","Type":"ContainerStarted","Data":"9df2664c2fff863cd8359bcaa4bd8c6d78427d1320410c94ddfa50050ed530b4"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.670000 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" event={"ID":"9e816aea-da9c-42ed-a56e-d373828fbee5","Type":"ContainerStarted","Data":"c8fabf3427538fa293fdf28e7885bf85faa74b334f142db207c10fed322a9841"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.670012 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" event={"ID":"9e816aea-da9c-42ed-a56e-d373828fbee5","Type":"ContainerStarted","Data":"7b609d10c47fabb5729d8033361afbfa3af18e1c7eb791e123fc91523404b105"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.670134 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.670494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.671755 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.171735913 +0000 UTC m=+156.364779567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.678458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" event={"ID":"0cd8d148-7ec1-4adf-b0e6-37cf3f7f03e4","Type":"ContainerStarted","Data":"754c3eb76f486843a618252106dfbc06e38f5729acd3fa5292e89779cb507b18"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.703116 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" event={"ID":"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108","Type":"ContainerStarted","Data":"2452f61f4ce977e9a493db6de82f40acf372834ffd2c06a2da169682bdd57043"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.703171 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" event={"ID":"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108","Type":"ContainerStarted","Data":"f0fc6b122b1e8865fdac84e9bc255335c76c432a07152e465445db2ec4e8b08e"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.704825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" event={"ID":"0d653843-6818-4b28-a3d8-53407876dfe8","Type":"ContainerStarted","Data":"8e48776a6f9d8bba470f2641d39e281936d01478f31492459d58bb542a3976d0"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.707793 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4ds2w" 
event={"ID":"3fd11507-99ee-4baf-8f4c-35ee40b8ed05","Type":"ContainerStarted","Data":"65493340aed1f562711e307592197de3658e06551398cfeb8d866d33e0984b78"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.707837 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4ds2w" event={"ID":"3fd11507-99ee-4baf-8f4c-35ee40b8ed05","Type":"ContainerStarted","Data":"6ec33b6adfdb35018a484a4c5d1d885ca6dd416eb51af8aa14b0f305481eaf08"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.713324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dcp7n" event={"ID":"1e97f198-d45d-4071-a2b9-f41b632caf2f","Type":"ContainerStarted","Data":"dfd8f6e97122a847b0683838d2c8c0d79c1bb17c6f28794bdbd4fc44bffea04e"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.714685 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.721029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" event={"ID":"31485654-ce13-4b9b-a5b0-907dbd8a9e70","Type":"ContainerStarted","Data":"3f326957a0b3c68927bd0b36e3b1489270cb71882396a9557e64b23105737acb"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.731854 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.731907 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: 
connection refused" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.731841 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" podStartSLOduration=135.73182245 podStartE2EDuration="2m15.73182245s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.731536773 +0000 UTC m=+155.924580427" watchObservedRunningTime="2026-01-29 15:30:53.73182245 +0000 UTC m=+155.924866104" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.733510 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g8qkk" podStartSLOduration=135.733466721 podStartE2EDuration="2m15.733466721s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.695853344 +0000 UTC m=+155.888896998" watchObservedRunningTime="2026-01-29 15:30:53.733466721 +0000 UTC m=+155.926510375" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.757948 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.758715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.762453 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-c8qfs" podStartSLOduration=135.762435793 podStartE2EDuration="2m15.762435793s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.762005832 +0000 UTC m=+155.955049486" watchObservedRunningTime="2026-01-29 15:30:53.762435793 +0000 UTC m=+155.955479457" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.765178 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" event={"ID":"2718b5b9-bc27-48a2-8b90-5755195f6e1e","Type":"ContainerStarted","Data":"13496496d83c3b02b7b9d6b24f55a545fa58db8e55355f0a91d7300fdb6f6ed6"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.765219 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" event={"ID":"2718b5b9-bc27-48a2-8b90-5755195f6e1e","Type":"ContainerStarted","Data":"a39d88a982be3432b326de75c5ce3fa7b54fd7b9c1fc1fa678c175c1d18cd2d3"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.776722 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.778254 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.278238847 +0000 UTC m=+156.471282501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.778663 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.793862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" event={"ID":"b51dbfee-ce3e-4c24-946c-f7481c5c286a","Type":"ContainerStarted","Data":"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.802872 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.805900 4835 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wrghl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.805946 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.810447 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" event={"ID":"1baac430-a512-49db-b763-45522a1db0b5","Type":"ContainerStarted","Data":"2751602b1108150a19baa5c7e73dcca1ceb0c8918eedb9ec5ccec2f07d3f6da6"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.819406 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" podStartSLOduration=135.819390832 podStartE2EDuration="2m15.819390832s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.817938536 +0000 UTC m=+156.010982190" watchObservedRunningTime="2026-01-29 15:30:53.819390832 +0000 UTC m=+156.012434486" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.820140 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nllch" podStartSLOduration=135.82013552 podStartE2EDuration="2m15.82013552s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.788835721 +0000 UTC m=+155.981879385" watchObservedRunningTime="2026-01-29 15:30:53.82013552 +0000 UTC m=+156.013179174" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.832780 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gx7b9" event={"ID":"79c1d489-95dd-4f18-a7b3-92fcbaec6e03","Type":"ContainerStarted","Data":"20f675b0a35d33a57ba740ddb0bb62a01b61bff462fbaf219a63e9a0215ae67e"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.835743 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" 
event={"ID":"6f01ad59-6644-40b0-9a55-8677c520a74c","Type":"ContainerStarted","Data":"3d338ae3582e7378b35ffcabfaebe1bc9de3319d6a91fb40446d04acdff12876"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.838028 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" event={"ID":"ecf17647-65a6-4798-bac2-7f8dffd3ad8f","Type":"ContainerStarted","Data":"7da3443248b5887a707667964cbb31cdc89527d3221f553e81c67ee8819a4aa4"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.840897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" event={"ID":"790fce65-a57d-4790-9772-954b8473cc10","Type":"ContainerStarted","Data":"c3db7834960c0020c5e0ad960b98ecc56fb254234b0379296c553f3b25a8a559"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.843101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jgjrx" event={"ID":"920c998c-9d29-4358-8607-a793e7873a17","Type":"ContainerStarted","Data":"c146faed0698639fc7ed06094b9726d6728f3ca4d2469f9e63b1bcc82df018a6"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.845437 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d6nj4" podStartSLOduration=135.8454146 podStartE2EDuration="2m15.8454146s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.845146914 +0000 UTC m=+156.038190568" watchObservedRunningTime="2026-01-29 15:30:53.8454146 +0000 UTC m=+156.038458254" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.855654 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" event={"ID":"2f567caf-8d53-4977-b270-da4a3c50a910","Type":"ContainerStarted","Data":"dff08af7be38843b165facca3d0f0d5a89a80365079e21d3c7fe24c2debd68aa"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.873838 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-88xw8" event={"ID":"0ae2d8fa-6975-4490-817e-feab685f5f01","Type":"ContainerStarted","Data":"370ed8f51bb0e5069063028df9940152cc369e02c4c644e3f45080a98aa6637a"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.873956 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-88xw8" event={"ID":"0ae2d8fa-6975-4490-817e-feab685f5f01","Type":"ContainerStarted","Data":"e7af1143938a35446b9c3325b0a2b30a468cb53f085358d68812eade9936045c"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.874214 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.877332 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.878300 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.378284359 +0000 UTC m=+156.571328003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.879738 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" event={"ID":"de99438b-f023-4f1f-97e0-25dcebd1b0bf","Type":"ContainerStarted","Data":"4ca3fd280f39a2901ace7e520113721fa1af0cbcd8bacdfe90a3be94099f28c8"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.880905 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.881780 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" event={"ID":"7e59e961-fb45-4454-bc76-a17976c8bac7","Type":"ContainerStarted","Data":"2af48bf5862f099ac6245e66fe14ec4066c2bbd9e3cf7cb88d5ca447cf21df27"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.885639 4835 patch_prober.go:28] interesting pod/console-operator-58897d9998-88xw8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.885698 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-88xw8" podUID="0ae2d8fa-6975-4490-817e-feab685f5f01" containerName="console-operator" probeResult="failure" output="Get 
\"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.886872 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" event={"ID":"3786342c-4f95-48d8-930d-ff9106d6b498","Type":"ContainerStarted","Data":"52ff2e48b7cea80c6199929ad6dd5d2702dcb17bc59fcbfd4f539c7c8fef1cd5"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.886909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" event={"ID":"3786342c-4f95-48d8-930d-ff9106d6b498","Type":"ContainerStarted","Data":"46f3c1c54c2deec9f98d7818ea4225c7820020d4a918c925e0733847b83fd7d1"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.887041 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.888350 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" event={"ID":"919e8487-ff15-4a8f-a5e0-dca045e62ae2","Type":"ContainerStarted","Data":"4764915d206088f3e93ed4df9d8c69826946c281bdcf7b06893a5f927530c260"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.888683 4835 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t66v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.888764 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.890950 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4ds2w" podStartSLOduration=6.890940195 podStartE2EDuration="6.890940195s" podCreationTimestamp="2026-01-29 15:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.870073435 +0000 UTC m=+156.063117089" watchObservedRunningTime="2026-01-29 15:30:53.890940195 +0000 UTC m=+156.083983849" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.891194 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dcp7n" podStartSLOduration=135.891189491 podStartE2EDuration="2m15.891189491s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.890039112 +0000 UTC m=+156.083082766" watchObservedRunningTime="2026-01-29 15:30:53.891189491 +0000 UTC m=+156.084233145" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.892415 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" event={"ID":"19984771-6056-43f4-9d47-b90b4ad1f73a","Type":"ContainerStarted","Data":"74ffde4959c8c037af2d2f6036b5429a4bf71b5361a8890603bda77f4fb64e14"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.894708 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zskhr" event={"ID":"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4","Type":"ContainerStarted","Data":"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca"} Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.936925 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" podStartSLOduration=135.9369067 podStartE2EDuration="2m15.9369067s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.935581147 +0000 UTC m=+156.128624801" watchObservedRunningTime="2026-01-29 15:30:53.9369067 +0000 UTC m=+156.129950354" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.959140 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c9z4r" podStartSLOduration=135.959115793 podStartE2EDuration="2m15.959115793s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.954408636 +0000 UTC m=+156.147452290" watchObservedRunningTime="2026-01-29 15:30:53.959115793 +0000 UTC m=+156.152159447" Jan 29 15:30:53 crc kubenswrapper[4835]: I0129 15:30:53.980179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:53 crc kubenswrapper[4835]: E0129 15:30:53.982584 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.482572978 +0000 UTC m=+156.675616622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.025759 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.039667 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" podStartSLOduration=136.03964744 podStartE2EDuration="2m16.03964744s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:53.991963712 +0000 UTC m=+156.185007366" watchObservedRunningTime="2026-01-29 15:30:54.03964744 +0000 UTC m=+156.232691094" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.041034 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rfx2l" podStartSLOduration=136.041027874 podStartE2EDuration="2m16.041027874s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:54.0388565 +0000 UTC m=+156.231900164" watchObservedRunningTime="2026-01-29 15:30:54.041027874 +0000 UTC m=+156.234071528" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.063630 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zskhr" podStartSLOduration=136.063610097 podStartE2EDuration="2m16.063610097s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:54.060633973 +0000 UTC m=+156.253677637" watchObservedRunningTime="2026-01-29 15:30:54.063610097 +0000 UTC m=+156.256653751" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.080857 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.080997 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.58097761 +0000 UTC m=+156.774021264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.081155 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.081458 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.581449881 +0000 UTC m=+156.774493535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.082836 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-88xw8" podStartSLOduration=136.082825016 podStartE2EDuration="2m16.082825016s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:54.08060291 +0000 UTC m=+156.273646564" watchObservedRunningTime="2026-01-29 15:30:54.082825016 +0000 UTC m=+156.275868660" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.108244 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" podStartSLOduration=136.108219558 podStartE2EDuration="2m16.108219558s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:54.103309146 +0000 UTC m=+156.296352800" watchObservedRunningTime="2026-01-29 15:30:54.108219558 +0000 UTC m=+156.301263222" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.181996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.182220 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.682189511 +0000 UTC m=+156.875233165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.182407 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.182752 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.682743775 +0000 UTC m=+156.875787419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.283671 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.283804 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.783781823 +0000 UTC m=+156.976825477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.283942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.284291 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.784279115 +0000 UTC m=+156.977322769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.385227 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.385406 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.885386254 +0000 UTC m=+157.078429908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.385532 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.386228 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.886203814 +0000 UTC m=+157.079247468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.486535 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.486767 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.986743909 +0000 UTC m=+157.179787563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.487264 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.487651 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:54.987639482 +0000 UTC m=+157.180683136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.573515 4835 patch_prober.go:28] interesting pod/router-default-5444994796-2v9n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:54 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Jan 29 15:30:54 crc kubenswrapper[4835]: [+]process-running ok Jan 29 15:30:54 crc kubenswrapper[4835]: healthz check failed Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.573629 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2v9n5" podUID="d4dc8416-2dc7-4e04-8228-3f90015ecbda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.588430 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.588680 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:55.088663859 +0000 UTC m=+157.281707513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.675039 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.675114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.690554 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.691010 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.190990368 +0000 UTC m=+157.384034022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.692697 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.791775 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.791985 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.291954114 +0000 UTC m=+157.484997768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.792466 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.792813 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.292805855 +0000 UTC m=+157.485849509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.893442 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.893651 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.393619707 +0000 UTC m=+157.586663361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.893702 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.894337 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.394283444 +0000 UTC m=+157.587327098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.902185 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" event={"ID":"af4d1d12-1cc3-4e15-896e-7ddd1d4bf108","Type":"ContainerStarted","Data":"dc9bf1f574a8ba6a117046e7b8ab20e2a0ea07ac42307989eff259a8f446b3d7"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.903979 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" event={"ID":"0d653843-6818-4b28-a3d8-53407876dfe8","Type":"ContainerStarted","Data":"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.905479 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" event={"ID":"0d909dc1-ec42-4a5a-afe0-def24b1342ab","Type":"ContainerStarted","Data":"fc3f41a81a2e9faeb96623d5c30b0db119ac2d0f30ee368f495b10b5dd7cb154"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.907269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gx7b9" event={"ID":"79c1d489-95dd-4f18-a7b3-92fcbaec6e03","Type":"ContainerStarted","Data":"47ed52f2dc56c7b72db0c95815c2a1c915c3a5e03cc68fe82c5f804780d07099"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.909111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" event={"ID":"6f01ad59-6644-40b0-9a55-8677c520a74c","Type":"ContainerStarted","Data":"92caca1ebe0e41bb9f3991cf4a5315e16b6ffb7178e67adbe5a3b0b6de4f7657"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.910897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" event={"ID":"2718b5b9-bc27-48a2-8b90-5755195f6e1e","Type":"ContainerStarted","Data":"39ce0e921f183db082a69d92f6390a6651cefaf436188be1ae41f29eb66f522d"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.914848 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" event={"ID":"19984771-6056-43f4-9d47-b90b4ad1f73a","Type":"ContainerStarted","Data":"7a0196fa9ad24f0451df021d6735d97f99800aea5cd5ddcebdbb7d6d7e4cb2a0"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.920583 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" event={"ID":"919e8487-ff15-4a8f-a5e0-dca045e62ae2","Type":"ContainerStarted","Data":"1d49033fac16d3274421a6493bc32706476436fdbe26e6b04d7cc6d38fe32b24"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.923261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" event={"ID":"ecf17647-65a6-4798-bac2-7f8dffd3ad8f","Type":"ContainerStarted","Data":"52b3b16355c782eed4bceff6a2223e1e893e8c4309ec2819e81bd5e72770b6f6"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.924883 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" event={"ID":"724b0b6d-fe63-47f2-85dd-90d5accd7cf3","Type":"ContainerStarted","Data":"44a43e68eedd55418d144fe84cf4d5eb8050af540618d1bb1f39fd049b5a5c74"} Jan 29 15:30:54 crc 
kubenswrapper[4835]: I0129 15:30:54.927054 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" event={"ID":"7e59e961-fb45-4454-bc76-a17976c8bac7","Type":"ContainerStarted","Data":"25371b5d38949feb69830cd4ea83cb31ad4c4e44ca92a2d30c2e08bb59694566"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.928742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" event={"ID":"790fce65-a57d-4790-9772-954b8473cc10","Type":"ContainerStarted","Data":"316cc9e723779f07bdced91745c74181276e1d3d731dda6178c97b2887cfd446"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.930483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" event={"ID":"31485654-ce13-4b9b-a5b0-907dbd8a9e70","Type":"ContainerStarted","Data":"96d6003dc653828a9ae0cfaa965d6e1acffe6363339ca43c83890eff43fcded4"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.934123 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-26lrq" podStartSLOduration=136.934096076 podStartE2EDuration="2m16.934096076s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:54.933238904 +0000 UTC m=+157.126282558" watchObservedRunningTime="2026-01-29 15:30:54.934096076 +0000 UTC m=+157.127139730" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.935606 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" event={"ID":"2aa4b6dc-12b4-42d1-aeb0-8c0b22f640fd","Type":"ContainerStarted","Data":"f9bee3a0ae5a1145a8f1e58200ca67709325b0d39849c1a8b6b8d9bd217180f4"} Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 
15:30:54.935768 4835 patch_prober.go:28] interesting pod/console-operator-58897d9998-88xw8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.935805 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-88xw8" podUID="0ae2d8fa-6975-4490-817e-feab685f5f01" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/readyz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.936619 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.936640 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.937760 4835 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wrghl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.937856 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.938605 4835 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-t66v2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.938655 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.940761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jxtmr" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.941188 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xdsww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.941227 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" podUID="1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.948651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7qx5k" Jan 29 15:30:54 crc kubenswrapper[4835]: I0129 15:30:54.994981 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:54 crc kubenswrapper[4835]: E0129 15:30:54.997452 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.497424783 +0000 UTC m=+157.690468437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.078391 4835 csr.go:261] certificate signing request csr-gqr5q is approved, waiting to be issued Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.080102 4835 csr.go:257] certificate signing request csr-gqr5q is issued Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.097113 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.097505 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.597488187 +0000 UTC m=+157.790531841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.198599 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.198928 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.698904043 +0000 UTC m=+157.891947697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.199129 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.199559 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.69955208 +0000 UTC m=+157.892595734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.300035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.300297 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.800253639 +0000 UTC m=+157.993297293 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.300763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.301181 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.801173022 +0000 UTC m=+157.994216676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.401597 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.401815 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.901784908 +0000 UTC m=+158.094828572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.402190 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.402490 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:55.902460185 +0000 UTC m=+158.095503839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.503700 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.504253 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.004228131 +0000 UTC m=+158.197271815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.573082 4835 patch_prober.go:28] interesting pod/router-default-5444994796-2v9n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:55 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Jan 29 15:30:55 crc kubenswrapper[4835]: [+]process-running ok Jan 29 15:30:55 crc kubenswrapper[4835]: healthz check failed Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.573702 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2v9n5" podUID="d4dc8416-2dc7-4e04-8228-3f90015ecbda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.605531 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.605897 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:56.105886273 +0000 UTC m=+158.298929927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.706921 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.707389 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.207370331 +0000 UTC m=+158.400413985 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.808812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.809301 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.309281971 +0000 UTC m=+158.502325625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.830064 4835 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jtl2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.830118 4835 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jtl2c container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.830155 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" podUID="de99438b-f023-4f1f-97e0-25dcebd1b0bf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.830178 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" podUID="de99438b-f023-4f1f-97e0-25dcebd1b0bf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: 
connection refused" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.910608 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.910932 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.410862151 +0000 UTC m=+158.603905805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.911149 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:55 crc kubenswrapper[4835]: E0129 15:30:55.911426 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:56.411415575 +0000 UTC m=+158.604459219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.940142 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gx7b9" event={"ID":"79c1d489-95dd-4f18-a7b3-92fcbaec6e03","Type":"ContainerStarted","Data":"c148af366c4bd6d33151f98984f59761225c5388a308e3748fa4ec52f15bf89f"} Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.940358 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gx7b9" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.943397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" event={"ID":"31485654-ce13-4b9b-a5b0-907dbd8a9e70","Type":"ContainerStarted","Data":"e677c610d6685a8ad142d5d615fa5d1728fb9ae1035f4866e4739967cc8f4765"} Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.949811 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" event={"ID":"19984771-6056-43f4-9d47-b90b4ad1f73a","Type":"ContainerStarted","Data":"5eaffc0787edf8356b55f5842d0e7cec91d7cea3a69402c20419da31df2acc99"} Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.952582 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" 
event={"ID":"790fce65-a57d-4790-9772-954b8473cc10","Type":"ContainerStarted","Data":"f6f4f769c7d8d8c1a53c14530f5e69222661e701fbebf9dbcbb94a9f6783f8b7"} Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.954919 4835 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jtl2c container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.954965 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" podUID="de99438b-f023-4f1f-97e0-25dcebd1b0bf" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.955131 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.955177 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.955756 4835 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-xdsww container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 29 
15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.955784 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" podUID="1d09b539-8c5e-49d2-b1ae-0f8d8d3faffa" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 29 15:30:55 crc kubenswrapper[4835]: I0129 15:30:55.969608 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gx7b9" podStartSLOduration=8.969585125 podStartE2EDuration="8.969585125s" podCreationTimestamp="2026-01-29 15:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:55.968051486 +0000 UTC m=+158.161095140" watchObservedRunningTime="2026-01-29 15:30:55.969585125 +0000 UTC m=+158.162628779" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.012489 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.014027 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.514002711 +0000 UTC m=+158.707046365 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.025686 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" podStartSLOduration=138.025664392 podStartE2EDuration="2m18.025664392s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.024214036 +0000 UTC m=+158.217257700" watchObservedRunningTime="2026-01-29 15:30:56.025664392 +0000 UTC m=+158.218708056" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.027408 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-grg4n" podStartSLOduration=138.027398705 podStartE2EDuration="2m18.027398705s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:55.992741792 +0000 UTC m=+158.185785446" watchObservedRunningTime="2026-01-29 15:30:56.027398705 +0000 UTC m=+158.220442359" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.049087 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8vm4c" podStartSLOduration=138.049070485 podStartE2EDuration="2m18.049070485s" podCreationTimestamp="2026-01-29 
15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.047922126 +0000 UTC m=+158.240965780" watchObservedRunningTime="2026-01-29 15:30:56.049070485 +0000 UTC m=+158.242114149" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.081895 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 15:25:55 +0000 UTC, rotation deadline is 2026-12-16 15:44:33.731067966 +0000 UTC Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.082221 4835 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7704h13m37.648851095s for next certificate rotation Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.089853 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" podStartSLOduration=138.08982988100001 podStartE2EDuration="2m18.089829881s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.071644977 +0000 UTC m=+158.264688651" watchObservedRunningTime="2026-01-29 15:30:56.089829881 +0000 UTC m=+158.282873535" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.089985 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" podStartSLOduration=138.089979714 podStartE2EDuration="2m18.089979714s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.087291587 +0000 UTC m=+158.280335241" watchObservedRunningTime="2026-01-29 15:30:56.089979714 +0000 UTC m=+158.283023368" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.107535 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cpfv6" podStartSLOduration=137.107515911 podStartE2EDuration="2m17.107515911s" podCreationTimestamp="2026-01-29 15:28:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.106306241 +0000 UTC m=+158.299349905" watchObservedRunningTime="2026-01-29 15:30:56.107515911 +0000 UTC m=+158.300559565" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.114512 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.115027 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.615014368 +0000 UTC m=+158.808058022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.127360 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-g94r2" podStartSLOduration=138.127341565 podStartE2EDuration="2m18.127341565s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.125794167 +0000 UTC m=+158.318837821" watchObservedRunningTime="2026-01-29 15:30:56.127341565 +0000 UTC m=+158.320385229" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.159730 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z6rg2" podStartSLOduration=138.159714512 podStartE2EDuration="2m18.159714512s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.156553613 +0000 UTC m=+158.349597267" watchObservedRunningTime="2026-01-29 15:30:56.159714512 +0000 UTC m=+158.352758166" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.213488 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-zqtcm" podStartSLOduration=137.213450471 podStartE2EDuration="2m17.213450471s" podCreationTimestamp="2026-01-29 15:28:39 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.208718563 +0000 UTC m=+158.401762217" watchObservedRunningTime="2026-01-29 15:30:56.213450471 +0000 UTC m=+158.406494125" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.214135 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" podStartSLOduration=56.214129728 podStartE2EDuration="56.214129728s" podCreationTimestamp="2026-01-29 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.183521315 +0000 UTC m=+158.376564969" watchObservedRunningTime="2026-01-29 15:30:56.214129728 +0000 UTC m=+158.407173382" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.215939 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.216103 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.716078026 +0000 UTC m=+158.909121680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.216432 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.216762 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.716752863 +0000 UTC m=+158.909796517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.225751 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz8pv" podStartSLOduration=138.225731537 podStartE2EDuration="2m18.225731537s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.22544437 +0000 UTC m=+158.418488024" watchObservedRunningTime="2026-01-29 15:30:56.225731537 +0000 UTC m=+158.418775191" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.251603 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6jqq5" podStartSLOduration=138.251584631 podStartE2EDuration="2m18.251584631s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:56.249710234 +0000 UTC m=+158.442753898" watchObservedRunningTime="2026-01-29 15:30:56.251584631 +0000 UTC m=+158.444628275" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.317606 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.317780 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.81775617 +0000 UTC m=+159.010799824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.317920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.318343 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.818333214 +0000 UTC m=+159.011376868 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.419369 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.419526 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.919508575 +0000 UTC m=+159.112552219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.420138 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.420393 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:56.920385707 +0000 UTC m=+159.113429361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.521370 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.521608 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.021580718 +0000 UTC m=+159.214624372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.522042 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.522413 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.022403679 +0000 UTC m=+159.215447333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.587355 4835 patch_prober.go:28] interesting pod/router-default-5444994796-2v9n5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 15:30:56 crc kubenswrapper[4835]: [-]has-synced failed: reason withheld Jan 29 15:30:56 crc kubenswrapper[4835]: [+]process-running ok Jan 29 15:30:56 crc kubenswrapper[4835]: healthz check failed Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.587940 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2v9n5" podUID="d4dc8416-2dc7-4e04-8228-3f90015ecbda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.623630 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.623828 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 15:30:57.123799135 +0000 UTC m=+159.316842789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.624615 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.624956 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.124943683 +0000 UTC m=+159.317987337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.727115 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.727581 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.22756006 +0000 UTC m=+159.420603714 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.727872 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.728211 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.228203366 +0000 UTC m=+159.421247010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.829349 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.829445 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.329431918 +0000 UTC m=+159.522475572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.829633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.829929 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.329921441 +0000 UTC m=+159.522965085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.931070 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:56 crc kubenswrapper[4835]: E0129 15:30:56.931491 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.43146014 +0000 UTC m=+159.624503794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:56 crc kubenswrapper[4835]: I0129 15:30:56.961025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" event={"ID":"96e34f38-5eb1-447d-959a-3310d8208b67","Type":"ContainerStarted","Data":"352ce0ccf6778ee75d33420035b2d423454a59ec437c8eb2cb8014e981cd88db"} Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.033142 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.033619 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.533607186 +0000 UTC m=+159.726650840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.137219 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.137949 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.637933075 +0000 UTC m=+159.830976719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.239511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.239900 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.739888485 +0000 UTC m=+159.932932139 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.254111 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.340334 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.340564 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.840536463 +0000 UTC m=+160.033580117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.340708 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.341039 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.841027495 +0000 UTC m=+160.034071149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.442113 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.442327 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.942302078 +0000 UTC m=+160.135345732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.442633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.442991 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:57.942982635 +0000 UTC m=+160.136026289 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.544250 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.544417 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.044400622 +0000 UTC m=+160.237444276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.544813 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.545133 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.045126531 +0000 UTC m=+160.238170185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.579255 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.580017 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.584107 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2v9n5" Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.647057 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.647216 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.147192544 +0000 UTC m=+160.340236208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.648202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.649602 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.149583053 +0000 UTC m=+160.342626717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.749980 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.750127 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.250102708 +0000 UTC m=+160.443146362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.750379 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.750748 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.250739564 +0000 UTC m=+160.443783218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.852082 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.852458 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.352428137 +0000 UTC m=+160.545471811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.954009 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:57 crc kubenswrapper[4835]: E0129 15:30:57.954296 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.454284375 +0000 UTC m=+160.647328029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.965786 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" event={"ID":"96e34f38-5eb1-447d-959a-3310d8208b67","Type":"ContainerStarted","Data":"bcd17806a8a07631cc5710b67f53455a7562b6822edf80fbdfc8e315a877b71c"} Jan 29 15:30:57 crc kubenswrapper[4835]: I0129 15:30:57.965825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" event={"ID":"96e34f38-5eb1-447d-959a-3310d8208b67","Type":"ContainerStarted","Data":"a7f0976f3ed2de43fbcfdb4efb52a4f2137f314a00ff69241891bd15c9f5b612"} Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.055348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.056277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.556261826 +0000 UTC m=+160.749305480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.157723 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.158120 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.658104273 +0000 UTC m=+160.851147927 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.258614 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.258993 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.758975447 +0000 UTC m=+160.952019111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.359974 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.360299 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.860288451 +0000 UTC m=+161.053332105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.461572 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.461972 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:58.961956404 +0000 UTC m=+161.155000058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.563089 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.563512 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.063499464 +0000 UTC m=+161.256543108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.610901 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.612842 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.629845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.648335 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.663742 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.663891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 
crc kubenswrapper[4835]: I0129 15:30:58.663975 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7ld\" (UniqueName: \"kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.663996 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.664094 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.16407812 +0000 UTC m=+161.357121774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.765650 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7ld\" (UniqueName: \"kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.765920 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.765984 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.766016 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: 
\"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.766307 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.266295757 +0000 UTC m=+161.459339411 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.766892 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.767003 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.772905 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.773874 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.776191 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.815359 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7ld\" (UniqueName: \"kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld\") pod \"community-operators-kfhvt\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.836190 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.840327 4835 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.844848 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jtl2c" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.867127 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.867364 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities\") pod \"certified-operators-7c9gq\" (UID: 
\"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.867389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2js2\" (UniqueName: \"kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.867425 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.867528 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.367513039 +0000 UTC m=+161.560556693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.940033 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.968062 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.970263 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.974412 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.974465 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.974500 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2js2\" (UniqueName: \"kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.974559 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content\") pod \"certified-operators-7c9gq\" (UID: 
\"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.974969 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: E0129 15:30:58.975256 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.475242443 +0000 UTC m=+161.668286097 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.975623 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.988648 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" 
event={"ID":"96e34f38-5eb1-447d-959a-3310d8208b67","Type":"ContainerStarted","Data":"78c8ed3cdecdd9b1d9e04f6dd3178dc8ff4c6defac47d5767b0245bf171eeca1"} Jan 29 15:30:58 crc kubenswrapper[4835]: I0129 15:30:58.989883 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.000989 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2js2\" (UniqueName: \"kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2\") pod \"certified-operators-7c9gq\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.075995 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.076272 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.57625738 +0000 UTC m=+161.769301034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.076623 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.076759 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.076854 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.076939 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fhh\" (UniqueName: 
\"kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.077250 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.577242754 +0000 UTC m=+161.770286408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.085811 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.168821 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k2mp9" podStartSLOduration=11.168797694 podStartE2EDuration="11.168797694s" podCreationTimestamp="2026-01-29 15:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:30:59.067326947 +0000 UTC m=+161.260370601" watchObservedRunningTime="2026-01-29 15:30:59.168797694 +0000 UTC m=+161.361841338" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.170742 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.171909 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.179992 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.180206 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fhh\" (UniqueName: \"kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.180320 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.180360 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.181165 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.681142322 +0000 UTC m=+161.874185976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.181397 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.181440 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.183087 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.218647 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fhh\" (UniqueName: \"kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh\") pod \"community-operators-jdrhn\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.281271 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.281340 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.281383 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5zxf\" (UniqueName: \"kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.281423 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.281848 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.781832681 +0000 UTC m=+161.974876335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.301077 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.318984 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:30:59 crc kubenswrapper[4835]: W0129 15:30:59.330677 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c3d86e8_b4be_4d4f_b844_516289b1b7ec.slice/crio-e48dc508ba80ba64ee1c0d9f29f0587ebe41bcab4bb23c8dc8ece725c6d8837f WatchSource:0}: Error finding container e48dc508ba80ba64ee1c0d9f29f0587ebe41bcab4bb23c8dc8ece725c6d8837f: Status 404 returned error can't find the container with id e48dc508ba80ba64ee1c0d9f29f0587ebe41bcab4bb23c8dc8ece725c6d8837f Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.384336 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.384583 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.384665 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.384738 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5zxf\" (UniqueName: \"kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.385111 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:30:59.885097954 +0000 UTC m=+162.078141608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.385524 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.385751 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.418560 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5zxf\" (UniqueName: \"kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf\") pod \"certified-operators-t2v7j\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.526214 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: 
\"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.528096 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.528662 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 15:31:00.02864737 +0000 UTC m=+162.221691024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-cbsh8" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.547765 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.553075 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.553408 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.561790 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.562058 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.629748 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.629969 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.630001 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: E0129 15:30:59.630193 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 15:31:00.13017495 +0000 UTC m=+162.323218604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.656782 4835 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T15:30:58.840364822Z","Handler":null,"Name":""} Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.672712 4835 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.672768 4835 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.695300 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:30:59 crc kubenswrapper[4835]: W0129 15:30:59.697459 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc397420c_528f_4c7a_ad5b_45767f0b7af1.slice/crio-e6b3ec2921863625af211ade5bf5884c136d30720bca2978d976740514bd9c32 WatchSource:0}: Error finding container e6b3ec2921863625af211ade5bf5884c136d30720bca2978d976740514bd9c32: Status 404 returned error can't find the container with id e6b3ec2921863625af211ade5bf5884c136d30720bca2978d976740514bd9c32 Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.732916 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.733013 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.733034 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.733090 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.754187 4835 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.754231 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.784337 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.794013 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.794686 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.802241 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.802403 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.863821 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.888403 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-cbsh8\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.937118 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.937352 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.937421 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:30:59 crc kubenswrapper[4835]: I0129 15:30:59.962514 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.038223 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.038288 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.038355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.075885 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerID="c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f" exitCode=0 Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.075986 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerDied","Data":"c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f"} Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.076020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerStarted","Data":"e48dc508ba80ba64ee1c0d9f29f0587ebe41bcab4bb23c8dc8ece725c6d8837f"} Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.088280 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.092053 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.113732 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.117594 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerStarted","Data":"e6b3ec2921863625af211ade5bf5884c136d30720bca2978d976740514bd9c32"} Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.160097 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.177055 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.183178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.268459 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.475743 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.501639 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.517301 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.517329 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.547580 4835 patch_prober.go:28] interesting pod/console-f9d7485db-zskhr container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.547630 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zskhr" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.580353 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.620964 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.644167 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.644217 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.644366 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.644412 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.659352 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.705348 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-xdsww" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.714896 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96bbc399-9838-4fd5-bf1e-66db67707c5c-metrics-certs\") pod \"network-metrics-daemon-p2hj4\" (UID: \"96bbc399-9838-4fd5-bf1e-66db67707c5c\") " pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.775434 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.776722 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-88xw8" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.777075 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.778993 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.799527 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.815904 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-p2hj4" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.832177 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.867248 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.867286 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcchx\" (UniqueName: \"kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.867374 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " 
pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.914874 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.939616 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.942742 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.954187 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.959104 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w8z6" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.969567 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.969652 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.969673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-kcchx\" (UniqueName: \"kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.970586 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.970798 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:00 crc kubenswrapper[4835]: I0129 15:31:00.992773 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.008563 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcchx\" (UniqueName: \"kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx\") pod \"redhat-marketplace-wsxd5\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.032035 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sp4ll" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.105950 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.138092 4835 generic.go:334] "Generic (PLEG): container finished" podID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerID="4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8" exitCode=0 Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.138172 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerDied","Data":"4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.138220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerStarted","Data":"bf5d5b8e3f4a740281e49f8d8e35f643c9cd09cdf7870e0a4d08dd52ef8f0f5a"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.152761 4835 generic.go:334] "Generic (PLEG): container finished" podID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerID="443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30" exitCode=0 Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.152867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerDied","Data":"443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.165304 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b855452-ac25-406f-9fe1-41b4694768ab","Type":"ContainerStarted","Data":"9a8dd95e1a5c4aea19e112c9e14468c3637d029afe200a4e422e7f1da419da6c"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.190134 4835 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.191158 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.201334 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.217740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90ca728c-6b11-4561-9797-cc35fa8addc1","Type":"ContainerStarted","Data":"ac80d6782df40981f7fe7d947d3100f161f4e4850aa39a7bf29ee4a2122db7d8"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.232459 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" event={"ID":"7b49fcb5-205f-42cc-92a5-eb8190c06f15","Type":"ContainerStarted","Data":"14a2bed51a9199cc44feeadffe1df61515df46cea2b9cde503d55590ed1094c7"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.232812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" event={"ID":"7b49fcb5-205f-42cc-92a5-eb8190c06f15","Type":"ContainerStarted","Data":"bd6fc3a2ef4c641c081139299eec055b58c559071563692c4da081f2ee30b9b4"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.239234 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerID="ef5884a2c4173eaffe1efd595beceab84293f0b91a01c1986a1f86e1b72a23e0" exitCode=0 Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.241786 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerDied","Data":"ef5884a2c4173eaffe1efd595beceab84293f0b91a01c1986a1f86e1b72a23e0"} 
Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.241843 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerStarted","Data":"21073effe40de7201d7884c8d273ce3127374a2244acd879a55b7f1e308d7a93"} Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.272762 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" podStartSLOduration=143.272732566 podStartE2EDuration="2m23.272732566s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:01.271142016 +0000 UTC m=+163.464185670" watchObservedRunningTime="2026-01-29 15:31:01.272732566 +0000 UTC m=+163.465776210" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.280742 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.280873 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbgp\" (UniqueName: \"kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.280902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.382293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbgp\" (UniqueName: \"kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.382347 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.382513 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.384399 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.384694 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.417746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbgp\" (UniqueName: \"kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp\") pod \"redhat-marketplace-hsm9c\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.448120 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.489892 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-p2hj4"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.519150 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.777387 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.778803 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.781594 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.801277 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.892970 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxskb\" (UniqueName: \"kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.893038 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.893079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.904186 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:31:01 crc kubenswrapper[4835]: W0129 15:31:01.912056 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf3a1edc_20c5_4348_bdce_8bf4fc613888.slice/crio-7ffd858be0c804c3baabe428392baf12b0f6d6d50f71758fdce8ab76a49c74df WatchSource:0}: Error finding container 7ffd858be0c804c3baabe428392baf12b0f6d6d50f71758fdce8ab76a49c74df: Status 404 returned error can't find the container with id 7ffd858be0c804c3baabe428392baf12b0f6d6d50f71758fdce8ab76a49c74df Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.994599 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxskb\" (UniqueName: \"kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.994906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.994946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: I0129 15:31:01.995431 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:01 crc kubenswrapper[4835]: 
I0129 15:31:01.995635 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.031624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxskb\" (UniqueName: \"kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb\") pod \"redhat-operators-5xfqg\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.171969 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.172997 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.193449 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.197083 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.269659 4835 generic.go:334] "Generic (PLEG): container finished" podID="8b855452-ac25-406f-9fe1-41b4694768ab" containerID="5205c9fcfc08fe54733ba6a6eb6a24989d1b7125c5053ede016ba6989442f59f" exitCode=0 Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.269752 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b855452-ac25-406f-9fe1-41b4694768ab","Type":"ContainerDied","Data":"5205c9fcfc08fe54733ba6a6eb6a24989d1b7125c5053ede016ba6989442f59f"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.300517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" event={"ID":"96bbc399-9838-4fd5-bf1e-66db67707c5c","Type":"ContainerStarted","Data":"66a6777407be60f7e185b203c10583bb5acda63c24d29e99a5c22eea70222356"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.304862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.305254 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnc42\" (UniqueName: \"kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.305329 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.307742 4835 generic.go:334] "Generic (PLEG): container finished" podID="90ca728c-6b11-4561-9797-cc35fa8addc1" containerID="9a169ae17c78a6dd37bf57c447b63a13aedefc8fddceaa88171b3ad745d4d143" exitCode=0 Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.308151 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90ca728c-6b11-4561-9797-cc35fa8addc1","Type":"ContainerDied","Data":"9a169ae17c78a6dd37bf57c447b63a13aedefc8fddceaa88171b3ad745d4d143"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.359001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerStarted","Data":"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.359047 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerStarted","Data":"7ffd858be0c804c3baabe428392baf12b0f6d6d50f71758fdce8ab76a49c74df"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.378393 4835 generic.go:334] "Generic (PLEG): container finished" podID="ea72779c-6414-41e3-9b36-9d64cf847442" containerID="a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea" exitCode=0 Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.379805 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" 
event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerDied","Data":"a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.379867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerStarted","Data":"1212da8d80c0e1d744d66694d3c86512a47531dea4021064deccef0d724b2d23"} Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.379887 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.408640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.408709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.408747 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnc42\" (UniqueName: \"kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.409269 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.409323 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.429959 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnc42\" (UniqueName: \"kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42\") pod \"redhat-operators-gbmzj\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.497418 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.830766 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:31:02 crc kubenswrapper[4835]: I0129 15:31:02.857698 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:31:02 crc kubenswrapper[4835]: W0129 15:31:02.876860 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9418ad1e_e0ae_4f2d_a8d6_58826e1367d1.slice/crio-0be2db396319c9e6c7ef78f80664b88da86a053ec7d83e7c096b41646fefcd93 WatchSource:0}: Error finding container 0be2db396319c9e6c7ef78f80664b88da86a053ec7d83e7c096b41646fefcd93: Status 404 returned error can't find the container with id 0be2db396319c9e6c7ef78f80664b88da86a053ec7d83e7c096b41646fefcd93 Jan 29 15:31:02 crc kubenswrapper[4835]: W0129 15:31:02.881232 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ec7cda2_a20a_4b97_9bda_50a75e00b65b.slice/crio-76083dc0bf1432de131d21eef43e382a4e96cb19658388ee00309f4f5783a0ef WatchSource:0}: Error finding container 76083dc0bf1432de131d21eef43e382a4e96cb19658388ee00309f4f5783a0ef: Status 404 returned error can't find the container with id 76083dc0bf1432de131d21eef43e382a4e96cb19658388ee00309f4f5783a0ef Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.429995 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" event={"ID":"96bbc399-9838-4fd5-bf1e-66db67707c5c","Type":"ContainerStarted","Data":"f4011a2ba3c59fabae27e89723d8d0c83d941e893d81fcd6d2b9bf4a9b74960c"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.430280 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-p2hj4" 
event={"ID":"96bbc399-9838-4fd5-bf1e-66db67707c5c","Type":"ContainerStarted","Data":"25a88c0675fa2eb443c3a914577d427fe520cc4bbf9e9ddf9d6c81dbce31be8b"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.455251 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-p2hj4" podStartSLOduration=145.455232314 podStartE2EDuration="2m25.455232314s" podCreationTimestamp="2026-01-29 15:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:03.454391863 +0000 UTC m=+165.647435517" watchObservedRunningTime="2026-01-29 15:31:03.455232314 +0000 UTC m=+165.648275958" Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.455329 4835 generic.go:334] "Generic (PLEG): container finished" podID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerID="d982d6e0c2faee411b1a41903a0595b90fd555aee8a48bc9aaa8882b5b9a1724" exitCode=0 Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.455402 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerDied","Data":"d982d6e0c2faee411b1a41903a0595b90fd555aee8a48bc9aaa8882b5b9a1724"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.455436 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerStarted","Data":"0be2db396319c9e6c7ef78f80664b88da86a053ec7d83e7c096b41646fefcd93"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.500113 4835 generic.go:334] "Generic (PLEG): container finished" podID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerID="b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961" exitCode=0 Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.500357 4835 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerDied","Data":"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.505207 4835 generic.go:334] "Generic (PLEG): container finished" podID="919e8487-ff15-4a8f-a5e0-dca045e62ae2" containerID="1d49033fac16d3274421a6493bc32706476436fdbe26e6b04d7cc6d38fe32b24" exitCode=0 Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.505278 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" event={"ID":"919e8487-ff15-4a8f-a5e0-dca045e62ae2","Type":"ContainerDied","Data":"1d49033fac16d3274421a6493bc32706476436fdbe26e6b04d7cc6d38fe32b24"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.509866 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerID="7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81" exitCode=0 Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.510135 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerDied","Data":"7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.510174 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerStarted","Data":"76083dc0bf1432de131d21eef43e382a4e96cb19658388ee00309f4f5783a0ef"} Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.884895 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.949996 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir\") pod \"90ca728c-6b11-4561-9797-cc35fa8addc1\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.950063 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access\") pod \"90ca728c-6b11-4561-9797-cc35fa8addc1\" (UID: \"90ca728c-6b11-4561-9797-cc35fa8addc1\") " Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.950153 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "90ca728c-6b11-4561-9797-cc35fa8addc1" (UID: "90ca728c-6b11-4561-9797-cc35fa8addc1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.950379 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/90ca728c-6b11-4561-9797-cc35fa8addc1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.962005 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "90ca728c-6b11-4561-9797-cc35fa8addc1" (UID: "90ca728c-6b11-4561-9797-cc35fa8addc1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:03 crc kubenswrapper[4835]: I0129 15:31:03.967758 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.052436 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir\") pod \"8b855452-ac25-406f-9fe1-41b4694768ab\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.052523 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access\") pod \"8b855452-ac25-406f-9fe1-41b4694768ab\" (UID: \"8b855452-ac25-406f-9fe1-41b4694768ab\") " Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.053031 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ca728c-6b11-4561-9797-cc35fa8addc1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.053861 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b855452-ac25-406f-9fe1-41b4694768ab" (UID: "8b855452-ac25-406f-9fe1-41b4694768ab"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.063655 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b855452-ac25-406f-9fe1-41b4694768ab" (UID: "8b855452-ac25-406f-9fe1-41b4694768ab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.154632 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b855452-ac25-406f-9fe1-41b4694768ab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.154676 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b855452-ac25-406f-9fe1-41b4694768ab-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.526103 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.526430 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"90ca728c-6b11-4561-9797-cc35fa8addc1","Type":"ContainerDied","Data":"ac80d6782df40981f7fe7d947d3100f161f4e4850aa39a7bf29ee4a2122db7d8"} Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.526456 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac80d6782df40981f7fe7d947d3100f161f4e4850aa39a7bf29ee4a2122db7d8" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.538188 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.538293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"8b855452-ac25-406f-9fe1-41b4694768ab","Type":"ContainerDied","Data":"9a8dd95e1a5c4aea19e112c9e14468c3637d029afe200a4e422e7f1da419da6c"} Jan 29 15:31:04 crc kubenswrapper[4835]: I0129 15:31:04.538330 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a8dd95e1a5c4aea19e112c9e14468c3637d029afe200a4e422e7f1da419da6c" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.070111 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.170275 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume\") pod \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.170408 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwnhm\" (UniqueName: \"kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm\") pod \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.170620 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume\") pod \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\" (UID: \"919e8487-ff15-4a8f-a5e0-dca045e62ae2\") " Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.171637 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume" (OuterVolumeSpecName: "config-volume") pod "919e8487-ff15-4a8f-a5e0-dca045e62ae2" (UID: "919e8487-ff15-4a8f-a5e0-dca045e62ae2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.175092 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "919e8487-ff15-4a8f-a5e0-dca045e62ae2" (UID: "919e8487-ff15-4a8f-a5e0-dca045e62ae2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.175821 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm" (OuterVolumeSpecName: "kube-api-access-kwnhm") pod "919e8487-ff15-4a8f-a5e0-dca045e62ae2" (UID: "919e8487-ff15-4a8f-a5e0-dca045e62ae2"). InnerVolumeSpecName "kube-api-access-kwnhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.272296 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/919e8487-ff15-4a8f-a5e0-dca045e62ae2-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.272336 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwnhm\" (UniqueName: \"kubernetes.io/projected/919e8487-ff15-4a8f-a5e0-dca045e62ae2-kube-api-access-kwnhm\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.272348 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/919e8487-ff15-4a8f-a5e0-dca045e62ae2-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.572233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" event={"ID":"919e8487-ff15-4a8f-a5e0-dca045e62ae2","Type":"ContainerDied","Data":"4764915d206088f3e93ed4df9d8c69826946c281bdcf7b06893a5f927530c260"} Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.572294 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4764915d206088f3e93ed4df9d8c69826946c281bdcf7b06893a5f927530c260" Jan 29 15:31:05 crc kubenswrapper[4835]: I0129 15:31:05.572460 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g" Jan 29 15:31:06 crc kubenswrapper[4835]: I0129 15:31:06.063265 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gx7b9" Jan 29 15:31:10 crc kubenswrapper[4835]: I0129 15:31:10.515496 4835 patch_prober.go:28] interesting pod/console-f9d7485db-zskhr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 29 15:31:10 crc kubenswrapper[4835]: I0129 15:31:10.516190 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zskhr" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 29 15:31:10 crc kubenswrapper[4835]: I0129 15:31:10.638924 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:31:10 crc kubenswrapper[4835]: I0129 15:31:10.639028 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:31:10 crc kubenswrapper[4835]: I0129 15:31:10.638943 4835 patch_prober.go:28] interesting pod/downloads-7954f5f757-dcp7n container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 15:31:10 
crc kubenswrapper[4835]: I0129 15:31:10.639125 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dcp7n" podUID="1e97f198-d45d-4071-a2b9-f41b632caf2f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 15:31:13 crc kubenswrapper[4835]: I0129 15:31:13.120078 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:31:13 crc kubenswrapper[4835]: I0129 15:31:13.125992 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" containerID="cri-o://52ff2e48b7cea80c6199929ad6dd5d2702dcb17bc59fcbfd4f539c7c8fef1cd5" gracePeriod=30 Jan 29 15:31:13 crc kubenswrapper[4835]: I0129 15:31:13.152084 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:31:13 crc kubenswrapper[4835]: I0129 15:31:13.152326 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerName="route-controller-manager" containerID="cri-o://79af2b0545ae9ce1b9e99444477ce9206e9d905ab6af27612422c1f7860059f8" gracePeriod=30 Jan 29 15:31:14 crc kubenswrapper[4835]: I0129 15:31:14.655589 4835 generic.go:334] "Generic (PLEG): container finished" podID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerID="79af2b0545ae9ce1b9e99444477ce9206e9d905ab6af27612422c1f7860059f8" exitCode=0 Jan 29 15:31:14 crc kubenswrapper[4835]: I0129 15:31:14.655675 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" 
event={"ID":"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1","Type":"ContainerDied","Data":"79af2b0545ae9ce1b9e99444477ce9206e9d905ab6af27612422c1f7860059f8"} Jan 29 15:31:14 crc kubenswrapper[4835]: I0129 15:31:14.658106 4835 generic.go:334] "Generic (PLEG): container finished" podID="3786342c-4f95-48d8-930d-ff9106d6b498" containerID="52ff2e48b7cea80c6199929ad6dd5d2702dcb17bc59fcbfd4f539c7c8fef1cd5" exitCode=0 Jan 29 15:31:14 crc kubenswrapper[4835]: I0129 15:31:14.658137 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" event={"ID":"3786342c-4f95-48d8-930d-ff9106d6b498","Type":"ContainerDied","Data":"52ff2e48b7cea80c6199929ad6dd5d2702dcb17bc59fcbfd4f539c7c8fef1cd5"} Jan 29 15:31:16 crc kubenswrapper[4835]: I0129 15:31:16.573559 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.844084 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.853216 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.878691 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:17 crc kubenswrapper[4835]: E0129 15:31:17.878914 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerName="route-controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.878928 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerName="route-controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: E0129 15:31:17.878940 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ca728c-6b11-4561-9797-cc35fa8addc1" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.878946 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ca728c-6b11-4561-9797-cc35fa8addc1" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: E0129 15:31:17.878954 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.878982 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: E0129 15:31:17.878998 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b855452-ac25-406f-9fe1-41b4694768ab" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879005 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b855452-ac25-406f-9fe1-41b4694768ab" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: E0129 15:31:17.879014 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="919e8487-ff15-4a8f-a5e0-dca045e62ae2" containerName="collect-profiles" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879019 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="919e8487-ff15-4a8f-a5e0-dca045e62ae2" containerName="collect-profiles" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879109 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" containerName="controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879120 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" containerName="route-controller-manager" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879128 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b855452-ac25-406f-9fe1-41b4694768ab" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879138 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="919e8487-ff15-4a8f-a5e0-dca045e62ae2" containerName="collect-profiles" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879148 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ca728c-6b11-4561-9797-cc35fa8addc1" containerName="pruner" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.879649 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:17 crc kubenswrapper[4835]: I0129 15:31:17.908722 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.002819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert\") pod \"3786342c-4f95-48d8-930d-ff9106d6b498\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config\") pod \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003270 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert\") pod \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003443 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles\") pod \"3786342c-4f95-48d8-930d-ff9106d6b498\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca\") pod \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\" (UID: 
\"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003765 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config\") pod \"3786342c-4f95-48d8-930d-ff9106d6b498\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.003873 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xglbm\" (UniqueName: \"kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm\") pod \"3786342c-4f95-48d8-930d-ff9106d6b498\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004043 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca\") pod \"3786342c-4f95-48d8-930d-ff9106d6b498\" (UID: \"3786342c-4f95-48d8-930d-ff9106d6b498\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004161 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gvg\" (UniqueName: \"kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg\") pod \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\" (UID: \"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1\") " Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004401 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004579 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004685 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.004779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.005482 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7w8\" (UniqueName: \"kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.005231 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config" (OuterVolumeSpecName: "config") pod "3786342c-4f95-48d8-930d-ff9106d6b498" (UID: "3786342c-4f95-48d8-930d-ff9106d6b498"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.005351 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3786342c-4f95-48d8-930d-ff9106d6b498" (UID: "3786342c-4f95-48d8-930d-ff9106d6b498"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.005401 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca" (OuterVolumeSpecName: "client-ca") pod "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" (UID: "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.005569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca" (OuterVolumeSpecName: "client-ca") pod "3786342c-4f95-48d8-930d-ff9106d6b498" (UID: "3786342c-4f95-48d8-930d-ff9106d6b498"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.006436 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config" (OuterVolumeSpecName: "config") pod "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" (UID: "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.009371 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3786342c-4f95-48d8-930d-ff9106d6b498" (UID: "3786342c-4f95-48d8-930d-ff9106d6b498"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.011618 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm" (OuterVolumeSpecName: "kube-api-access-xglbm") pod "3786342c-4f95-48d8-930d-ff9106d6b498" (UID: "3786342c-4f95-48d8-930d-ff9106d6b498"). InnerVolumeSpecName "kube-api-access-xglbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.014893 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg" (OuterVolumeSpecName: "kube-api-access-d8gvg") pod "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" (UID: "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1"). InnerVolumeSpecName "kube-api-access-d8gvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.022653 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" (UID: "c10cd28c-1ab1-4ea8-9adc-bf962764f1c1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107260 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107391 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107444 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107525 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7w8\" (UniqueName: \"kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107587 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert\") pod 
\"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107679 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107703 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107721 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107738 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107754 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xglbm\" (UniqueName: \"kubernetes.io/projected/3786342c-4f95-48d8-930d-ff9106d6b498-kube-api-access-xglbm\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107772 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3786342c-4f95-48d8-930d-ff9106d6b498-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107791 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gvg\" (UniqueName: \"kubernetes.io/projected/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-kube-api-access-d8gvg\") on 
node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107809 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3786342c-4f95-48d8-930d-ff9106d6b498-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.107830 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.108633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.108876 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.110541 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.114909 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.136397 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7w8\" (UniqueName: \"kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8\") pod \"controller-manager-695fdf4d89-sz8zt\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.196459 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.682412 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" event={"ID":"3786342c-4f95-48d8-930d-ff9106d6b498","Type":"ContainerDied","Data":"46f3c1c54c2deec9f98d7818ea4225c7820020d4a918c925e0733847b83fd7d1"} Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.682451 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-t66v2" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.682520 4835 scope.go:117] "RemoveContainer" containerID="52ff2e48b7cea80c6199929ad6dd5d2702dcb17bc59fcbfd4f539c7c8fef1cd5" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.685561 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" event={"ID":"c10cd28c-1ab1-4ea8-9adc-bf962764f1c1","Type":"ContainerDied","Data":"fd23c4af991d11e4bab37acc8dc681f64d1d1047fc5b970f74a66ed3d9a3c3d1"} Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.685604 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz" Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.700397 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.703121 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-t66v2"] Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.711845 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:31:18 crc kubenswrapper[4835]: I0129 15:31:18.716161 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g8gtz"] Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.169121 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.305902 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.306609 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.310354 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.310541 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.310791 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.310921 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.313827 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.319438 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.321801 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.437236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: 
\"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.437632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dth9v\" (UniqueName: \"kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.437786 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.437862 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.505441 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3786342c-4f95-48d8-930d-ff9106d6b498" path="/var/lib/kubelet/pods/3786342c-4f95-48d8-930d-ff9106d6b498/volumes" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.507156 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10cd28c-1ab1-4ea8-9adc-bf962764f1c1" 
path="/var/lib/kubelet/pods/c10cd28c-1ab1-4ea8-9adc-bf962764f1c1/volumes" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.518588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.528153 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.539226 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.539618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.539862 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.540055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dth9v\" (UniqueName: 
\"kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.541934 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.542315 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.550904 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.558576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dth9v\" (UniqueName: \"kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v\") pod \"route-controller-manager-5d8b8d4467-tnfsw\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc 
kubenswrapper[4835]: I0129 15:31:20.639772 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:20 crc kubenswrapper[4835]: I0129 15:31:20.650835 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dcp7n" Jan 29 15:31:21 crc kubenswrapper[4835]: I0129 15:31:21.614652 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:31:21 crc kubenswrapper[4835]: I0129 15:31:21.614705 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:31:30 crc kubenswrapper[4835]: I0129 15:31:30.612595 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zsc2h" Jan 29 15:31:31 crc kubenswrapper[4835]: I0129 15:31:31.236144 4835 scope.go:117] "RemoveContainer" containerID="79af2b0545ae9ce1b9e99444477ce9206e9d905ab6af27612422c1f7860059f8" Jan 29 15:31:31 crc kubenswrapper[4835]: E0129 15:31:31.695783 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:31 crc kubenswrapper[4835]: E0129 15:31:31.696066 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2js2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7c9gq_openshift-marketplace(c397420c-528f-4c7a-ad5b-45767f0b7af1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:31 crc kubenswrapper[4835]: E0129 15:31:31.697827 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7c9gq" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" Jan 29 15:31:33 crc kubenswrapper[4835]: I0129 15:31:33.101085 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:33 crc kubenswrapper[4835]: I0129 15:31:33.191967 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:34 crc kubenswrapper[4835]: E0129 15:31:34.471309 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:31:34 crc kubenswrapper[4835]: E0129 15:31:34.472027 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5zxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t2v7j_openshift-marketplace(b6af2394-aac4-4b19-a6f0-81065d23fbb9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:34 crc kubenswrapper[4835]: E0129 15:31:34.474547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t2v7j" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" Jan 29 15:31:37 crc 
kubenswrapper[4835]: E0129 15:31:37.593311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t2v7j" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" Jan 29 15:31:37 crc kubenswrapper[4835]: E0129 15:31:37.593891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7c9gq" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.342950 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.343611 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snbgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-hsm9c_openshift-marketplace(cf3a1edc-20c5-4348-bdce-8bf4fc613888): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.344892 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hsm9c" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" Jan 29 15:31:40 crc 
kubenswrapper[4835]: I0129 15:31:40.600160 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.601196 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.604672 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.605951 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.609866 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.794822 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.794878 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.896166 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.896310 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.896773 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.916005 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: I0129 15:31:40.934814 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.985072 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.985212 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcchx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wsxd5_openshift-marketplace(ea72779c-6414-41e3-9b36-9d64cf847442): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:40 crc kubenswrapper[4835]: E0129 15:31:40.986516 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wsxd5" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.591264 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.593137 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.607049 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.652172 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.652298 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.652321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.753640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.753780 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.753817 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.753953 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.753951 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.773599 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access\") pod \"installer-9-crc\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:44 crc kubenswrapper[4835]: I0129 15:31:44.911414 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:31:47 crc kubenswrapper[4835]: E0129 15:31:47.220483 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wsxd5" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" Jan 29 15:31:47 crc kubenswrapper[4835]: E0129 15:31:47.220483 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hsm9c" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" Jan 29 15:31:50 crc kubenswrapper[4835]: E0129 15:31:50.933446 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:50 crc kubenswrapper[4835]: E0129 15:31:50.934038 
4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66fhh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jdrhn_openshift-marketplace(c9177be1-3dc4-4827-9ef5-e05afbf025bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:50 crc kubenswrapper[4835]: E0129 15:31:50.935312 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jdrhn" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.615599 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.615686 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.615746 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.616506 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.616650 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" 
containerID="cri-o://0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d" gracePeriod=600 Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.772699 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.773199 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jk7ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kfhvt_openshift-marketplace(3c3d86e8-b4be-4d4f-b844-516289b1b7ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.774592 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kfhvt" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.898927 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.899688 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxskb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5xfqg_openshift-marketplace(3ec7cda2-a20a-4b97-9bda-50a75e00b65b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.901336 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5xfqg" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" Jan 29 15:31:51 crc 
kubenswrapper[4835]: I0129 15:31:51.911393 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d" exitCode=0 Jan 29 15:31:51 crc kubenswrapper[4835]: I0129 15:31:51.911792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d"} Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.923741 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jdrhn" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" Jan 29 15:31:51 crc kubenswrapper[4835]: E0129 15:31:51.923784 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kfhvt" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.129459 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.267384 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.284837 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.290441 4835 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 15:31:52 crc kubenswrapper[4835]: W0129 15:31:52.292947 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb456e9b8_ec76_4995_a08d_1577772dc71d.slice/crio-7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86 WatchSource:0}: Error finding container 7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86: Status 404 returned error can't find the container with id 7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86 Jan 29 15:31:52 crc kubenswrapper[4835]: W0129 15:31:52.302764 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod547b3a6f_9728_4868_9cf7_f2ca233748b8.slice/crio-8e351d78a931f941bea29da6e923fbfcc33acae151436770b354065e1433d322 WatchSource:0}: Error finding container 8e351d78a931f941bea29da6e923fbfcc33acae151436770b354065e1433d322: Status 404 returned error can't find the container with id 8e351d78a931f941bea29da6e923fbfcc33acae151436770b354065e1433d322 Jan 29 15:31:52 crc kubenswrapper[4835]: W0129 15:31:52.303759 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14c6976_e7a8_4f2f_bb4d_7e7f9181a48e.slice/crio-8681a6c8b1439cf4f046a63003c309d8b1826d2f174c333521e0f6754cff8162 WatchSource:0}: Error finding container 8681a6c8b1439cf4f046a63003c309d8b1826d2f174c333521e0f6754cff8162: Status 404 returned error can't find the container with id 8681a6c8b1439cf4f046a63003c309d8b1826d2f174c333521e0f6754cff8162 Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.938032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"e195bf84-aee9-4f63-b408-52f08c729e35","Type":"ContainerStarted","Data":"34d7c3c9b0fea67aa3e4a9b647e176a60fd06d62fe982f4259f65f143f63039f"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.938739 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e195bf84-aee9-4f63-b408-52f08c729e35","Type":"ContainerStarted","Data":"bd8d43fe0f0940b4a59a194f1e82f99191d09575fdc3db447d46cca2dc2d3a0a"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.945027 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" event={"ID":"547b3a6f-9728-4868-9cf7-f2ca233748b8","Type":"ContainerStarted","Data":"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.945079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" event={"ID":"547b3a6f-9728-4868-9cf7-f2ca233748b8","Type":"ContainerStarted","Data":"8e351d78a931f941bea29da6e923fbfcc33acae151436770b354065e1433d322"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.945188 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerName="route-controller-manager" containerID="cri-o://b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530" gracePeriod=30 Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.945661 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.950565 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" 
event={"ID":"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e","Type":"ContainerStarted","Data":"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.950611 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" event={"ID":"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e","Type":"ContainerStarted","Data":"8681a6c8b1439cf4f046a63003c309d8b1826d2f174c333521e0f6754cff8162"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.950719 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" podUID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" containerName="controller-manager" containerID="cri-o://e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939" gracePeriod=30 Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.951038 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.959585 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerStarted","Data":"62b8f38668e903ff631ceaae28d2d5ba9163b8b2d0fff39415bae48e89ca2545"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.964778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.966900 4835 patch_prober.go:28] interesting pod/route-controller-manager-5d8b8d4467-tnfsw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.55:8443/healthz\": EOF" start-of-body= Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.966952 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": EOF" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.967554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b456e9b8-ec76-4995-a08d-1577772dc71d","Type":"ContainerStarted","Data":"c2fe07f0988da5afe04035b2a535acb528edd9b823a58ca50103211df51a0106"} Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.967601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b456e9b8-ec76-4995-a08d-1577772dc71d","Type":"ContainerStarted","Data":"7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86"} Jan 29 15:31:52 crc kubenswrapper[4835]: E0129 15:31:52.971165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5xfqg" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.973091 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.976705 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.976558827 podStartE2EDuration="12.976558827s" podCreationTimestamp="2026-01-29 15:31:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:52.957573213 +0000 UTC m=+215.150616867" watchObservedRunningTime="2026-01-29 15:31:52.976558827 +0000 UTC m=+215.169602482" Jan 29 15:31:52 crc kubenswrapper[4835]: I0129 15:31:52.999553 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" podStartSLOduration=39.99953022 podStartE2EDuration="39.99953022s" podCreationTimestamp="2026-01-29 15:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:52.979860088 +0000 UTC m=+215.172903742" watchObservedRunningTime="2026-01-29 15:31:52.99953022 +0000 UTC m=+215.192573874" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.001598 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" podStartSLOduration=40.00159167 podStartE2EDuration="40.00159167s" podCreationTimestamp="2026-01-29 15:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:53.00076081 +0000 UTC m=+215.193804494" watchObservedRunningTime="2026-01-29 15:31:53.00159167 +0000 UTC m=+215.194635334" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.057932 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.057908788 podStartE2EDuration="9.057908788s" podCreationTimestamp="2026-01-29 15:31:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:53.051156513 +0000 UTC m=+215.244200167" watchObservedRunningTime="2026-01-29 
15:31:53.057908788 +0000 UTC m=+215.250952442" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.643289 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.660120 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.682732 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:31:53 crc kubenswrapper[4835]: E0129 15:31:53.683025 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" containerName="controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.683044 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" containerName="controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: E0129 15:31:53.683057 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerName="route-controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.683065 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerName="route-controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.683198 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerName="route-controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.683215 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" containerName="controller-manager" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.683762 4835 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685213 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert\") pod \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685250 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles\") pod \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685276 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dth9v\" (UniqueName: \"kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v\") pod \"547b3a6f-9728-4868-9cf7-f2ca233748b8\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685306 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx7w8\" (UniqueName: \"kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8\") pod \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685343 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca\") pod \"547b3a6f-9728-4868-9cf7-f2ca233748b8\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685358 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert\") pod \"547b3a6f-9728-4868-9cf7-f2ca233748b8\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685377 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca\") pod \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685391 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config\") pod \"547b3a6f-9728-4868-9cf7-f2ca233748b8\" (UID: \"547b3a6f-9728-4868-9cf7-f2ca233748b8\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.685424 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config\") pod \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\" (UID: \"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e\") " Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.686497 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config" (OuterVolumeSpecName: "config") pod "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" (UID: "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.687030 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" (UID: "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.692843 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" (UID: "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.695297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "547b3a6f-9728-4868-9cf7-f2ca233748b8" (UID: "547b3a6f-9728-4868-9cf7-f2ca233748b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.697207 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.697933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca" (OuterVolumeSpecName: "client-ca") pod "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" (UID: "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.698199 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config" (OuterVolumeSpecName: "config") pod "547b3a6f-9728-4868-9cf7-f2ca233748b8" (UID: "547b3a6f-9728-4868-9cf7-f2ca233748b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.698328 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v" (OuterVolumeSpecName: "kube-api-access-dth9v") pod "547b3a6f-9728-4868-9cf7-f2ca233748b8" (UID: "547b3a6f-9728-4868-9cf7-f2ca233748b8"). InnerVolumeSpecName "kube-api-access-dth9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.699339 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "547b3a6f-9728-4868-9cf7-f2ca233748b8" (UID: "547b3a6f-9728-4868-9cf7-f2ca233748b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.704597 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8" (OuterVolumeSpecName: "kube-api-access-mx7w8") pod "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" (UID: "c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e"). InnerVolumeSpecName "kube-api-access-mx7w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.787882 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788420 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788495 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788528 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788550 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8v6\" (UniqueName: 
\"kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788671 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788692 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788702 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788712 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788721 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788732 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dth9v\" (UniqueName: \"kubernetes.io/projected/547b3a6f-9728-4868-9cf7-f2ca233748b8-kube-api-access-dth9v\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788744 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx7w8\" (UniqueName: 
\"kubernetes.io/projected/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e-kube-api-access-mx7w8\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788753 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/547b3a6f-9728-4868-9cf7-f2ca233748b8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.788763 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/547b3a6f-9728-4868-9cf7-f2ca233748b8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.889578 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.889636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.889657 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8v6\" (UniqueName: \"kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.889702 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.889725 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.890895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.891123 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.891131 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: 
I0129 15:31:53.895119 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.915881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8v6\" (UniqueName: \"kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6\") pod \"controller-manager-5cd94b8bdd-wk8cz\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.978816 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.978866 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" event={"ID":"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e","Type":"ContainerDied","Data":"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939"} Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.978950 4835 scope.go:117] "RemoveContainer" containerID="e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939" Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.978717 4835 generic.go:334] "Generic (PLEG): container finished" podID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" containerID="e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939" exitCode=0 Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.979098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-695fdf4d89-sz8zt" 
event={"ID":"c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e","Type":"ContainerDied","Data":"8681a6c8b1439cf4f046a63003c309d8b1826d2f174c333521e0f6754cff8162"} Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.985040 4835 generic.go:334] "Generic (PLEG): container finished" podID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerID="62b8f38668e903ff631ceaae28d2d5ba9163b8b2d0fff39415bae48e89ca2545" exitCode=0 Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.985144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerDied","Data":"62b8f38668e903ff631ceaae28d2d5ba9163b8b2d0fff39415bae48e89ca2545"} Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.985430 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerStarted","Data":"341913919d658006eb4d11a3c0a2199504a45400dcb5aee17535d411d3396607"} Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.989090 4835 generic.go:334] "Generic (PLEG): container finished" podID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerID="5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7" exitCode=0 Jan 29 15:31:53 crc kubenswrapper[4835]: I0129 15:31:53.989146 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerDied","Data":"5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7"} Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:53.999569 4835 scope.go:117] "RemoveContainer" containerID="e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939" Jan 29 15:31:54 crc kubenswrapper[4835]: E0129 15:31:54.000669 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939\": container with ID starting with e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939 not found: ID does not exist" containerID="e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.000709 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939"} err="failed to get container status \"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939\": rpc error: code = NotFound desc = could not find container \"e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939\": container with ID starting with e5cfc1d4a905fdfca7191a5acb7e7832bf62f78003f624bfa0dbc7b0a8ac6939 not found: ID does not exist" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.001185 4835 generic.go:334] "Generic (PLEG): container finished" podID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerID="d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6" exitCode=0 Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.001269 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerDied","Data":"d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6"} Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.007204 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.010453 4835 generic.go:334] "Generic (PLEG): container finished" podID="e195bf84-aee9-4f63-b408-52f08c729e35" containerID="34d7c3c9b0fea67aa3e4a9b647e176a60fd06d62fe982f4259f65f143f63039f" exitCode=0 Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.010543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e195bf84-aee9-4f63-b408-52f08c729e35","Type":"ContainerDied","Data":"34d7c3c9b0fea67aa3e4a9b647e176a60fd06d62fe982f4259f65f143f63039f"} Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.012655 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbmzj" podStartSLOduration=1.945207864 podStartE2EDuration="52.012634158s" podCreationTimestamp="2026-01-29 15:31:02 +0000 UTC" firstStartedPulling="2026-01-29 15:31:03.481447937 +0000 UTC m=+165.674491591" lastFinishedPulling="2026-01-29 15:31:53.548874241 +0000 UTC m=+215.741917885" observedRunningTime="2026-01-29 15:31:54.006127239 +0000 UTC m=+216.199170923" watchObservedRunningTime="2026-01-29 15:31:54.012634158 +0000 UTC m=+216.205677822" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.018112 4835 generic.go:334] "Generic (PLEG): container finished" podID="547b3a6f-9728-4868-9cf7-f2ca233748b8" containerID="b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530" exitCode=0 Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.018203 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.018261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" event={"ID":"547b3a6f-9728-4868-9cf7-f2ca233748b8","Type":"ContainerDied","Data":"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530"} Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.018306 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw" event={"ID":"547b3a6f-9728-4868-9cf7-f2ca233748b8","Type":"ContainerDied","Data":"8e351d78a931f941bea29da6e923fbfcc33acae151436770b354065e1433d322"} Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.018335 4835 scope.go:117] "RemoveContainer" containerID="b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.069913 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.073799 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-695fdf4d89-sz8zt"] Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.096377 4835 scope.go:117] "RemoveContainer" containerID="b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530" Jan 29 15:31:54 crc kubenswrapper[4835]: E0129 15:31:54.097282 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530\": container with ID starting with b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530 not found: ID does not exist" 
containerID="b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.097337 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530"} err="failed to get container status \"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530\": rpc error: code = NotFound desc = could not find container \"b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530\": container with ID starting with b5e7db9bd142889ffccb0b43725e1c327c7a3cb59ed00f96d92a0540b831a530 not found: ID does not exist" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.106297 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.110646 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d8b8d4467-tnfsw"] Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.444844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:31:54 crc kubenswrapper[4835]: W0129 15:31:54.456712 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c9de912_bbb6_4196_b8d8_cf4f8287db92.slice/crio-445f1e6046bb34b05740bf45cdfd80dc1dae17c0958f83705110c2735ece8048 WatchSource:0}: Error finding container 445f1e6046bb34b05740bf45cdfd80dc1dae17c0958f83705110c2735ece8048: Status 404 returned error can't find the container with id 445f1e6046bb34b05740bf45cdfd80dc1dae17c0958f83705110c2735ece8048 Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.499731 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="547b3a6f-9728-4868-9cf7-f2ca233748b8" 
path="/var/lib/kubelet/pods/547b3a6f-9728-4868-9cf7-f2ca233748b8/volumes" Jan 29 15:31:54 crc kubenswrapper[4835]: I0129 15:31:54.500298 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e" path="/var/lib/kubelet/pods/c14c6976-e7a8-4f2f-bb4d-7e7f9181a48e/volumes" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.026116 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" event={"ID":"7c9de912-bbb6-4196-b8d8-cf4f8287db92","Type":"ContainerStarted","Data":"807fef428319c5d96b2d8eb02b022f2390d3b1516292edab4c6464433866c72e"} Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.026175 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" event={"ID":"7c9de912-bbb6-4196-b8d8-cf4f8287db92","Type":"ContainerStarted","Data":"445f1e6046bb34b05740bf45cdfd80dc1dae17c0958f83705110c2735ece8048"} Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.027662 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.029994 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerStarted","Data":"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc"} Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.032180 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerStarted","Data":"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962"} Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.047192 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.052831 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" podStartSLOduration=22.052804279 podStartE2EDuration="22.052804279s" podCreationTimestamp="2026-01-29 15:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:55.051031105 +0000 UTC m=+217.244074759" watchObservedRunningTime="2026-01-29 15:31:55.052804279 +0000 UTC m=+217.245847943" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.082916 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2v7j" podStartSLOduration=2.691155586 podStartE2EDuration="56.082892455s" podCreationTimestamp="2026-01-29 15:30:59 +0000 UTC" firstStartedPulling="2026-01-29 15:31:01.144670445 +0000 UTC m=+163.337714099" lastFinishedPulling="2026-01-29 15:31:54.536407314 +0000 UTC m=+216.729450968" observedRunningTime="2026-01-29 15:31:55.08229685 +0000 UTC m=+217.275340504" watchObservedRunningTime="2026-01-29 15:31:55.082892455 +0000 UTC m=+217.275936109" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.123647 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7c9gq" podStartSLOduration=3.465587133 podStartE2EDuration="57.123616501s" podCreationTimestamp="2026-01-29 15:30:58 +0000 UTC" firstStartedPulling="2026-01-29 15:31:01.158754286 +0000 UTC m=+163.351797940" lastFinishedPulling="2026-01-29 15:31:54.816783654 +0000 UTC m=+217.009827308" observedRunningTime="2026-01-29 15:31:55.119209844 +0000 UTC m=+217.312253498" watchObservedRunningTime="2026-01-29 15:31:55.123616501 +0000 UTC m=+217.316660165" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.375897 4835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.440838 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access\") pod \"e195bf84-aee9-4f63-b408-52f08c729e35\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.440939 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir\") pod \"e195bf84-aee9-4f63-b408-52f08c729e35\" (UID: \"e195bf84-aee9-4f63-b408-52f08c729e35\") " Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.441095 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e195bf84-aee9-4f63-b408-52f08c729e35" (UID: "e195bf84-aee9-4f63-b408-52f08c729e35"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.441313 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e195bf84-aee9-4f63-b408-52f08c729e35-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.448620 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e195bf84-aee9-4f63-b408-52f08c729e35" (UID: "e195bf84-aee9-4f63-b408-52f08c729e35"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:31:55 crc kubenswrapper[4835]: I0129 15:31:55.542391 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e195bf84-aee9-4f63-b408-52f08c729e35-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.045793 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.047131 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e195bf84-aee9-4f63-b408-52f08c729e35","Type":"ContainerDied","Data":"bd8d43fe0f0940b4a59a194f1e82f99191d09575fdc3db447d46cca2dc2d3a0a"} Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.047446 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd8d43fe0f0940b4a59a194f1e82f99191d09575fdc3db447d46cca2dc2d3a0a" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.333702 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:31:56 crc kubenswrapper[4835]: E0129 15:31:56.333979 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e195bf84-aee9-4f63-b408-52f08c729e35" containerName="pruner" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.333993 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e195bf84-aee9-4f63-b408-52f08c729e35" containerName="pruner" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.334114 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e195bf84-aee9-4f63-b408-52f08c729e35" containerName="pruner" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.334619 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.338557 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.338599 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.338627 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.338691 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.340399 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.345622 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.347600 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.453631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.453728 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8k5\" (UniqueName: \"kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.453770 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.453825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.554829 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.555019 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" 
(UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.555094 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.555181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8k5\" (UniqueName: \"kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.556809 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.557848 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.575398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.575484 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8k5\" (UniqueName: \"kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5\") pod \"route-controller-manager-6b5cb865f5-tc6d7\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.701567 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:56 crc kubenswrapper[4835]: I0129 15:31:56.962342 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:31:56 crc kubenswrapper[4835]: W0129 15:31:56.969038 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97578ece_a35a_4fd7_9516_700a0da1621b.slice/crio-62620f60aec1608235b44518c237ebe02a46fb077765f861560cfc7b460792dc WatchSource:0}: Error finding container 62620f60aec1608235b44518c237ebe02a46fb077765f861560cfc7b460792dc: Status 404 returned error can't find the container with id 62620f60aec1608235b44518c237ebe02a46fb077765f861560cfc7b460792dc Jan 29 15:31:57 crc kubenswrapper[4835]: I0129 15:31:57.054844 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" 
event={"ID":"97578ece-a35a-4fd7-9516-700a0da1621b","Type":"ContainerStarted","Data":"62620f60aec1608235b44518c237ebe02a46fb077765f861560cfc7b460792dc"} Jan 29 15:31:58 crc kubenswrapper[4835]: I0129 15:31:58.064622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" event={"ID":"97578ece-a35a-4fd7-9516-700a0da1621b","Type":"ContainerStarted","Data":"9e89f9d8ecdbf26a9dd7856053cc6c69935c58c946c822b379612900d08fed6f"} Jan 29 15:31:58 crc kubenswrapper[4835]: I0129 15:31:58.065731 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:58 crc kubenswrapper[4835]: I0129 15:31:58.073751 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:31:58 crc kubenswrapper[4835]: I0129 15:31:58.088443 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" podStartSLOduration=25.088425243 podStartE2EDuration="25.088425243s" podCreationTimestamp="2026-01-29 15:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:31:58.085884381 +0000 UTC m=+220.278928035" watchObservedRunningTime="2026-01-29 15:31:58.088425243 +0000 UTC m=+220.281468897" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.086540 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.086896 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.529878 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.529948 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.591949 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:31:59 crc kubenswrapper[4835]: I0129 15:31:59.595126 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:32:00 crc kubenswrapper[4835]: I0129 15:32:00.128914 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:32:00 crc kubenswrapper[4835]: I0129 15:32:00.131515 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:32:01 crc kubenswrapper[4835]: I0129 15:32:01.083283 4835 generic.go:334] "Generic (PLEG): container finished" podID="ea72779c-6414-41e3-9b36-9d64cf847442" containerID="59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b" exitCode=0 Jan 29 15:32:01 crc kubenswrapper[4835]: I0129 15:32:01.083397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerDied","Data":"59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b"} Jan 29 15:32:01 crc kubenswrapper[4835]: I0129 15:32:01.200194 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.092656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" 
event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerStarted","Data":"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90"} Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.092882 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2v7j" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="registry-server" containerID="cri-o://9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc" gracePeriod=2 Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.118191 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wsxd5" podStartSLOduration=3.019930422 podStartE2EDuration="1m2.118173361s" podCreationTimestamp="2026-01-29 15:31:00 +0000 UTC" firstStartedPulling="2026-01-29 15:31:02.40067866 +0000 UTC m=+164.593722314" lastFinishedPulling="2026-01-29 15:32:01.498921599 +0000 UTC m=+223.691965253" observedRunningTime="2026-01-29 15:32:02.116257794 +0000 UTC m=+224.309301448" watchObservedRunningTime="2026-01-29 15:32:02.118173361 +0000 UTC m=+224.311217015" Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.499358 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.499409 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:02 crc kubenswrapper[4835]: I0129 15:32:02.543508 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.026204 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.109012 4835 generic.go:334] "Generic (PLEG): container finished" podID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerID="f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4" exitCode=0 Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.109130 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerDied","Data":"f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4"} Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.117078 4835 generic.go:334] "Generic (PLEG): container finished" podID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerID="9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc" exitCode=0 Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.117688 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2v7j" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.117718 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerDied","Data":"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc"} Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.117782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2v7j" event={"ID":"b6af2394-aac4-4b19-a6f0-81065d23fbb9","Type":"ContainerDied","Data":"bf5d5b8e3f4a740281e49f8d8e35f643c9cd09cdf7870e0a4d08dd52ef8f0f5a"} Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.117806 4835 scope.go:117] "RemoveContainer" containerID="9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.143887 4835 scope.go:117] "RemoveContainer" containerID="5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.156414 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities\") pod \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.156513 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5zxf\" (UniqueName: \"kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf\") pod \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.156625 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content\") pod \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\" (UID: \"b6af2394-aac4-4b19-a6f0-81065d23fbb9\") " Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.157225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities" (OuterVolumeSpecName: "utilities") pod "b6af2394-aac4-4b19-a6f0-81065d23fbb9" (UID: "b6af2394-aac4-4b19-a6f0-81065d23fbb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.163921 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf" (OuterVolumeSpecName: "kube-api-access-n5zxf") pod "b6af2394-aac4-4b19-a6f0-81065d23fbb9" (UID: "b6af2394-aac4-4b19-a6f0-81065d23fbb9"). InnerVolumeSpecName "kube-api-access-n5zxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.164944 4835 scope.go:117] "RemoveContainer" containerID="4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.175829 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.214505 4835 scope.go:117] "RemoveContainer" containerID="9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc" Jan 29 15:32:03 crc kubenswrapper[4835]: E0129 15:32:03.215315 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc\": container with ID starting with 9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc not found: ID does not exist" containerID="9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.215372 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc"} err="failed to get container status \"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc\": rpc error: code = NotFound desc = could not find container \"9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc\": container with ID starting with 9aa6c7692726d095662070b5c9ef3ac4d216527055613458a43be220d50c3ecc not found: ID does not exist" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.215404 4835 scope.go:117] "RemoveContainer" containerID="5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7" Jan 29 15:32:03 crc kubenswrapper[4835]: E0129 15:32:03.215755 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7\": container with ID starting with 5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7 not found: ID does not exist" containerID="5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.215782 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7"} err="failed to get container status \"5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7\": rpc error: code = NotFound desc = could not find container \"5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7\": container with ID starting with 5d6eb216632a14742e44a2387ab539fc2fd6b74d7eda733f44b22fa8c1bf6eb7 not found: ID does not exist" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.215799 4835 scope.go:117] "RemoveContainer" containerID="4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8" Jan 29 15:32:03 crc kubenswrapper[4835]: E0129 15:32:03.216211 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8\": container with ID starting with 4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8 not found: ID does not exist" containerID="4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.216251 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8"} err="failed to get container status \"4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8\": rpc error: code = NotFound desc = could not find container 
\"4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8\": container with ID starting with 4be2b9548927bfbdc0aafcc000c9f4850d1b23d7cb2b8c714ef117a8512b1bc8 not found: ID does not exist" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.233170 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6af2394-aac4-4b19-a6f0-81065d23fbb9" (UID: "b6af2394-aac4-4b19-a6f0-81065d23fbb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.258503 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.258545 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6af2394-aac4-4b19-a6f0-81065d23fbb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.258584 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5zxf\" (UniqueName: \"kubernetes.io/projected/b6af2394-aac4-4b19-a6f0-81065d23fbb9-kube-api-access-n5zxf\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.442754 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:32:03 crc kubenswrapper[4835]: I0129 15:32:03.446263 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2v7j"] Jan 29 15:32:04 crc kubenswrapper[4835]: I0129 15:32:04.503392 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" 
path="/var/lib/kubelet/pods/b6af2394-aac4-4b19-a6f0-81065d23fbb9/volumes" Jan 29 15:32:05 crc kubenswrapper[4835]: I0129 15:32:05.403536 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:32:05 crc kubenswrapper[4835]: I0129 15:32:05.403756 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbmzj" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="registry-server" containerID="cri-o://341913919d658006eb4d11a3c0a2199504a45400dcb5aee17535d411d3396607" gracePeriod=2 Jan 29 15:32:08 crc kubenswrapper[4835]: I0129 15:32:08.152596 4835 generic.go:334] "Generic (PLEG): container finished" podID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerID="341913919d658006eb4d11a3c0a2199504a45400dcb5aee17535d411d3396607" exitCode=0 Jan 29 15:32:08 crc kubenswrapper[4835]: I0129 15:32:08.152695 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerDied","Data":"341913919d658006eb4d11a3c0a2199504a45400dcb5aee17535d411d3396607"} Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.096634 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.171736 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmzj" event={"ID":"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1","Type":"ContainerDied","Data":"0be2db396319c9e6c7ef78f80664b88da86a053ec7d83e7c096b41646fefcd93"} Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.171809 4835 scope.go:117] "RemoveContainer" containerID="341913919d658006eb4d11a3c0a2199504a45400dcb5aee17535d411d3396607" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.172046 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmzj" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.230993 4835 scope.go:117] "RemoveContainer" containerID="62b8f38668e903ff631ceaae28d2d5ba9163b8b2d0fff39415bae48e89ca2545" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.260291 4835 scope.go:117] "RemoveContainer" containerID="d982d6e0c2faee411b1a41903a0595b90fd555aee8a48bc9aaa8882b5b9a1724" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.262586 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnc42\" (UniqueName: \"kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42\") pod \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.262680 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content\") pod \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.262779 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities\") pod \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\" (UID: \"9418ad1e-e0ae-4f2d-a8d6-58826e1367d1\") " Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.263717 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities" (OuterVolumeSpecName: "utilities") pod "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" (UID: "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.271338 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42" (OuterVolumeSpecName: "kube-api-access-qnc42") pod "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" (UID: "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1"). InnerVolumeSpecName "kube-api-access-qnc42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.365499 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnc42\" (UniqueName: \"kubernetes.io/projected/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-kube-api-access-qnc42\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.365922 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.432344 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" (UID: "9418ad1e-e0ae-4f2d-a8d6-58826e1367d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.466926 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.516257 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:32:10 crc kubenswrapper[4835]: I0129 15:32:10.532635 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbmzj"] Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.105941 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.106363 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.154112 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.182948 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerStarted","Data":"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7"} Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.184996 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerID="feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f" exitCode=0 Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.185105 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" 
event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerDied","Data":"feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f"} Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.195821 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerStarted","Data":"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122"} Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.198556 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerDied","Data":"d039228f9290ef53fd4e52240e754da429748ef081792f59ecec15b80072adaf"} Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.198456 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerID="d039228f9290ef53fd4e52240e754da429748ef081792f59ecec15b80072adaf" exitCode=0 Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.254153 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.255257 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hsm9c" podStartSLOduration=2.477747626 podStartE2EDuration="1m10.255226332s" podCreationTimestamp="2026-01-29 15:31:01 +0000 UTC" firstStartedPulling="2026-01-29 15:31:02.36457722 +0000 UTC m=+164.557620874" lastFinishedPulling="2026-01-29 15:32:10.142055926 +0000 UTC m=+232.335099580" observedRunningTime="2026-01-29 15:32:11.253378207 +0000 UTC m=+233.446421871" watchObservedRunningTime="2026-01-29 15:32:11.255226332 +0000 UTC m=+233.448269986" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.520129 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:11 crc kubenswrapper[4835]: I0129 15:32:11.520190 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.208451 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerStarted","Data":"caebc792e680b6fa574c841c42204c4ee5bca12783dfe5cd1dc05cb954f7fe68"} Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.210728 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerID="ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7" exitCode=0 Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.210765 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerDied","Data":"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7"} Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.215422 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerStarted","Data":"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac"} Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.258861 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kfhvt" podStartSLOduration=2.7043501389999998 podStartE2EDuration="1m14.258838478s" podCreationTimestamp="2026-01-29 15:30:58 +0000 UTC" firstStartedPulling="2026-01-29 15:31:00.091801802 +0000 UTC m=+162.284845456" lastFinishedPulling="2026-01-29 15:32:11.646290141 +0000 UTC m=+233.839333795" observedRunningTime="2026-01-29 
15:32:12.257506416 +0000 UTC m=+234.450550070" watchObservedRunningTime="2026-01-29 15:32:12.258838478 +0000 UTC m=+234.451882132" Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.260184 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jdrhn" podStartSLOduration=3.833614616 podStartE2EDuration="1m14.260178801s" podCreationTimestamp="2026-01-29 15:30:58 +0000 UTC" firstStartedPulling="2026-01-29 15:31:01.24237044 +0000 UTC m=+163.435414094" lastFinishedPulling="2026-01-29 15:32:11.668934625 +0000 UTC m=+233.861978279" observedRunningTime="2026-01-29 15:32:12.231455668 +0000 UTC m=+234.424499322" watchObservedRunningTime="2026-01-29 15:32:12.260178801 +0000 UTC m=+234.453222445" Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.499632 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" path="/var/lib/kubelet/pods/9418ad1e-e0ae-4f2d-a8d6-58826e1367d1/volumes" Jan 29 15:32:12 crc kubenswrapper[4835]: I0129 15:32:12.564716 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hsm9c" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="registry-server" probeResult="failure" output=< Jan 29 15:32:12 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:32:12 crc kubenswrapper[4835]: > Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.109728 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.110371 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerName="controller-manager" containerID="cri-o://807fef428319c5d96b2d8eb02b022f2390d3b1516292edab4c6464433866c72e" 
gracePeriod=30 Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.212097 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.212702 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" podUID="97578ece-a35a-4fd7-9516-700a0da1621b" containerName="route-controller-manager" containerID="cri-o://9e89f9d8ecdbf26a9dd7856053cc6c69935c58c946c822b379612900d08fed6f" gracePeriod=30 Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.229554 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerStarted","Data":"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11"} Jan 29 15:32:13 crc kubenswrapper[4835]: I0129 15:32:13.260528 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5xfqg" podStartSLOduration=2.922376425 podStartE2EDuration="1m12.260508757s" podCreationTimestamp="2026-01-29 15:31:01 +0000 UTC" firstStartedPulling="2026-01-29 15:31:03.526050358 +0000 UTC m=+165.719094012" lastFinishedPulling="2026-01-29 15:32:12.86418269 +0000 UTC m=+235.057226344" observedRunningTime="2026-01-29 15:32:13.258201251 +0000 UTC m=+235.451244905" watchObservedRunningTime="2026-01-29 15:32:13.260508757 +0000 UTC m=+235.453552411" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.237104 4835 generic.go:334] "Generic (PLEG): container finished" podID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerID="807fef428319c5d96b2d8eb02b022f2390d3b1516292edab4c6464433866c72e" exitCode=0 Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.237187 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" event={"ID":"7c9de912-bbb6-4196-b8d8-cf4f8287db92","Type":"ContainerDied","Data":"807fef428319c5d96b2d8eb02b022f2390d3b1516292edab4c6464433866c72e"} Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.239747 4835 generic.go:334] "Generic (PLEG): container finished" podID="97578ece-a35a-4fd7-9516-700a0da1621b" containerID="9e89f9d8ecdbf26a9dd7856053cc6c69935c58c946c822b379612900d08fed6f" exitCode=0 Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.239800 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" event={"ID":"97578ece-a35a-4fd7-9516-700a0da1621b","Type":"ContainerDied","Data":"9e89f9d8ecdbf26a9dd7856053cc6c69935c58c946c822b379612900d08fed6f"} Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.295749 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.299348 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327568 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327821 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="extract-utilities" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327837 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="extract-utilities" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327850 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="extract-utilities" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327856 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="extract-utilities" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327863 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerName="controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327870 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerName="controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327877 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327883 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327894 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="extract-content" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327900 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="extract-content" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327910 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="extract-content" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327916 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="extract-content" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327924 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327931 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: E0129 15:32:14.327942 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97578ece-a35a-4fd7-9516-700a0da1621b" containerName="route-controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.327948 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="97578ece-a35a-4fd7-9516-700a0da1621b" containerName="route-controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.328033 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerName="controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.328044 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6af2394-aac4-4b19-a6f0-81065d23fbb9" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.328053 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9418ad1e-e0ae-4f2d-a8d6-58826e1367d1" containerName="registry-server" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.328062 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="97578ece-a35a-4fd7-9516-700a0da1621b" containerName="route-controller-manager" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.328589 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.362879 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433343 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config\") pod \"97578ece-a35a-4fd7-9516-700a0da1621b\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433414 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles\") pod \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433438 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8v6\" (UniqueName: \"kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6\") pod \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433506 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert\") pod \"97578ece-a35a-4fd7-9516-700a0da1621b\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433548 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca\") pod \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config\") pod \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433590 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert\") pod \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\" (UID: \"7c9de912-bbb6-4196-b8d8-cf4f8287db92\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433650 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx8k5\" (UniqueName: \"kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5\") pod \"97578ece-a35a-4fd7-9516-700a0da1621b\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.433675 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca\") pod \"97578ece-a35a-4fd7-9516-700a0da1621b\" (UID: \"97578ece-a35a-4fd7-9516-700a0da1621b\") " Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434567 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca" (OuterVolumeSpecName: "client-ca") pod "97578ece-a35a-4fd7-9516-700a0da1621b" (UID: "97578ece-a35a-4fd7-9516-700a0da1621b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434605 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c9de912-bbb6-4196-b8d8-cf4f8287db92" (UID: "7c9de912-bbb6-4196-b8d8-cf4f8287db92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvcx6\" (UniqueName: \"kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434753 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7c9de912-bbb6-4196-b8d8-cf4f8287db92" (UID: "7c9de912-bbb6-4196-b8d8-cf4f8287db92"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434804 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config" (OuterVolumeSpecName: "config") pod "7c9de912-bbb6-4196-b8d8-cf4f8287db92" (UID: "7c9de912-bbb6-4196-b8d8-cf4f8287db92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434919 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config" (OuterVolumeSpecName: "config") pod "97578ece-a35a-4fd7-9516-700a0da1621b" (UID: "97578ece-a35a-4fd7-9516-700a0da1621b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434948 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.434999 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435016 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435173 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435190 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97578ece-a35a-4fd7-9516-700a0da1621b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435249 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435278 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.435292 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c9de912-bbb6-4196-b8d8-cf4f8287db92-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.443299 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5" (OuterVolumeSpecName: "kube-api-access-hx8k5") pod "97578ece-a35a-4fd7-9516-700a0da1621b" (UID: "97578ece-a35a-4fd7-9516-700a0da1621b"). InnerVolumeSpecName "kube-api-access-hx8k5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.445737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6" (OuterVolumeSpecName: "kube-api-access-xg8v6") pod "7c9de912-bbb6-4196-b8d8-cf4f8287db92" (UID: "7c9de912-bbb6-4196-b8d8-cf4f8287db92"). InnerVolumeSpecName "kube-api-access-xg8v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.445881 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "97578ece-a35a-4fd7-9516-700a0da1621b" (UID: "97578ece-a35a-4fd7-9516-700a0da1621b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.450799 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c9de912-bbb6-4196-b8d8-cf4f8287db92" (UID: "7c9de912-bbb6-4196-b8d8-cf4f8287db92"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536303 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536356 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536373 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536438 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvcx6\" (UniqueName: \"kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536503 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97578ece-a35a-4fd7-9516-700a0da1621b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536520 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c9de912-bbb6-4196-b8d8-cf4f8287db92-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536532 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx8k5\" (UniqueName: \"kubernetes.io/projected/97578ece-a35a-4fd7-9516-700a0da1621b-kube-api-access-hx8k5\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.536644 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8v6\" (UniqueName: \"kubernetes.io/projected/7c9de912-bbb6-4196-b8d8-cf4f8287db92-kube-api-access-xg8v6\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.537526 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.537813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.541541 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.561453 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvcx6\" (UniqueName: \"kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6\") pod \"route-controller-manager-6bffb87749-24xsm\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.644442 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:14 crc kubenswrapper[4835]: I0129 15:32:14.847667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.008819 4835 patch_prober.go:28] interesting pod/controller-manager-5cd94b8bdd-wk8cz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" start-of-body= Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.008923 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: i/o timeout" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.246972 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" event={"ID":"de357655-3542-47bc-b940-695460f40cf0","Type":"ContainerStarted","Data":"3a1626ad1f5365df70bd54a9fe17b25bcd4a6a8c291479ea1f9aaa6b5750397b"} Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.247076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" event={"ID":"de357655-3542-47bc-b940-695460f40cf0","Type":"ContainerStarted","Data":"abff4dfc5204704406c0c9d03c4dfbc9580f4f937888d6fe2f9c9cf41eb9bd9f"} Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.247116 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.250583 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" event={"ID":"7c9de912-bbb6-4196-b8d8-cf4f8287db92","Type":"ContainerDied","Data":"445f1e6046bb34b05740bf45cdfd80dc1dae17c0958f83705110c2735ece8048"} Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.250644 4835 scope.go:117] "RemoveContainer" containerID="807fef428319c5d96b2d8eb02b022f2390d3b1516292edab4c6464433866c72e" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.250656 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.253020 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" event={"ID":"97578ece-a35a-4fd7-9516-700a0da1621b","Type":"ContainerDied","Data":"62620f60aec1608235b44518c237ebe02a46fb077765f861560cfc7b460792dc"} Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.253167 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.281326 4835 scope.go:117] "RemoveContainer" containerID="9e89f9d8ecdbf26a9dd7856053cc6c69935c58c946c822b379612900d08fed6f" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.293870 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" podStartSLOduration=2.2938514469999998 podStartE2EDuration="2.293851447s" podCreationTimestamp="2026-01-29 15:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:15.280980973 +0000 UTC m=+237.474024627" watchObservedRunningTime="2026-01-29 15:32:15.293851447 +0000 UTC m=+237.486895101" Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.295752 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.301634 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cd94b8bdd-wk8cz"] Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.315665 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.318760 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5cb865f5-tc6d7"] Jan 29 15:32:15 crc kubenswrapper[4835]: I0129 15:32:15.329908 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.364129 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.365558 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.369198 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.369422 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.370155 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.372311 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.372690 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.373225 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.375219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.375396 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.467044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.467099 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.467136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99t4c\" (UniqueName: \"kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.467192 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.467442 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.499609 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c9de912-bbb6-4196-b8d8-cf4f8287db92" path="/var/lib/kubelet/pods/7c9de912-bbb6-4196-b8d8-cf4f8287db92/volumes" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.500252 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97578ece-a35a-4fd7-9516-700a0da1621b" path="/var/lib/kubelet/pods/97578ece-a35a-4fd7-9516-700a0da1621b/volumes" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.569185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.569233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " 
pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.569270 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99t4c\" (UniqueName: \"kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.569310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.569372 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.570942 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.571077 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles\") pod 
\"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.571655 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.575163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.593272 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99t4c\" (UniqueName: \"kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c\") pod \"controller-manager-776867bf5d-rww9q\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:16 crc kubenswrapper[4835]: I0129 15:32:16.695802 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:17 crc kubenswrapper[4835]: I0129 15:32:17.134532 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:32:17 crc kubenswrapper[4835]: I0129 15:32:17.270700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" event={"ID":"72c87103-4a0f-451f-9679-0c5256c5399f","Type":"ContainerStarted","Data":"feb2ceaf04417d7b84ec535a90d7fa6431e15465c7071741c1575365d17e6838"} Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.277532 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" event={"ID":"72c87103-4a0f-451f-9679-0c5256c5399f","Type":"ContainerStarted","Data":"bc87d81c5929101202d0f16b353268ce03582c9625991e060fe5493404a86077"} Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.277934 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.282978 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.305749 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" podStartSLOduration=5.30571498 podStartE2EDuration="5.30571498s" podCreationTimestamp="2026-01-29 15:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:32:18.302366768 +0000 UTC m=+240.495410432" watchObservedRunningTime="2026-01-29 15:32:18.30571498 +0000 UTC m=+240.498758674" Jan 29 15:32:18 crc 
kubenswrapper[4835]: I0129 15:32:18.940333 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.940908 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:32:18 crc kubenswrapper[4835]: I0129 15:32:18.990682 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:32:19 crc kubenswrapper[4835]: I0129 15:32:19.302252 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:19 crc kubenswrapper[4835]: I0129 15:32:19.302319 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:19 crc kubenswrapper[4835]: I0129 15:32:19.778672 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:32:19 crc kubenswrapper[4835]: I0129 15:32:19.784416 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:20 crc kubenswrapper[4835]: I0129 15:32:20.333673 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:21 crc kubenswrapper[4835]: I0129 15:32:21.566765 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:21 crc kubenswrapper[4835]: I0129 15:32:21.627268 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:21 crc kubenswrapper[4835]: I0129 15:32:21.796946 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:32:22 crc kubenswrapper[4835]: I0129 15:32:22.198220 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:32:22 crc kubenswrapper[4835]: I0129 15:32:22.198300 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:32:22 crc kubenswrapper[4835]: I0129 15:32:22.256729 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:32:22 crc kubenswrapper[4835]: I0129 15:32:22.304961 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdrhn" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="registry-server" containerID="cri-o://caebc792e680b6fa574c841c42204c4ee5bca12783dfe5cd1dc05cb954f7fe68" gracePeriod=2 Jan 29 15:32:22 crc kubenswrapper[4835]: I0129 15:32:22.359920 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:32:23 crc kubenswrapper[4835]: I0129 15:32:23.996377 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:32:23 crc kubenswrapper[4835]: I0129 15:32:23.996644 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hsm9c" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="registry-server" containerID="cri-o://33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122" gracePeriod=2 Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.324281 4835 generic.go:334] "Generic (PLEG): container finished" podID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerID="caebc792e680b6fa574c841c42204c4ee5bca12783dfe5cd1dc05cb954f7fe68" exitCode=0 Jan 29 15:32:24 crc 
kubenswrapper[4835]: I0129 15:32:24.324829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerDied","Data":"caebc792e680b6fa574c841c42204c4ee5bca12783dfe5cd1dc05cb954f7fe68"} Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.514418 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.692784 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66fhh\" (UniqueName: \"kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh\") pod \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.692938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content\") pod \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.692973 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities\") pod \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\" (UID: \"c9177be1-3dc4-4827-9ef5-e05afbf025bd\") " Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.693853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities" (OuterVolumeSpecName: "utilities") pod "c9177be1-3dc4-4827-9ef5-e05afbf025bd" (UID: "c9177be1-3dc4-4827-9ef5-e05afbf025bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.705807 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh" (OuterVolumeSpecName: "kube-api-access-66fhh") pod "c9177be1-3dc4-4827-9ef5-e05afbf025bd" (UID: "c9177be1-3dc4-4827-9ef5-e05afbf025bd"). InnerVolumeSpecName "kube-api-access-66fhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.753442 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9177be1-3dc4-4827-9ef5-e05afbf025bd" (UID: "c9177be1-3dc4-4827-9ef5-e05afbf025bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.795084 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66fhh\" (UniqueName: \"kubernetes.io/projected/c9177be1-3dc4-4827-9ef5-e05afbf025bd-kube-api-access-66fhh\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.795136 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:24 crc kubenswrapper[4835]: I0129 15:32:24.795148 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9177be1-3dc4-4827-9ef5-e05afbf025bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.299117 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.339679 4835 generic.go:334] "Generic (PLEG): container finished" podID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerID="33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122" exitCode=0 Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.340155 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerDied","Data":"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122"} Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.340189 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hsm9c" event={"ID":"cf3a1edc-20c5-4348-bdce-8bf4fc613888","Type":"ContainerDied","Data":"7ffd858be0c804c3baabe428392baf12b0f6d6d50f71758fdce8ab76a49c74df"} Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.340212 4835 scope.go:117] "RemoveContainer" containerID="33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.340370 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hsm9c" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.345419 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdrhn" event={"ID":"c9177be1-3dc4-4827-9ef5-e05afbf025bd","Type":"ContainerDied","Data":"21073effe40de7201d7884c8d273ce3127374a2244acd879a55b7f1e308d7a93"} Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.345577 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdrhn" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.361949 4835 scope.go:117] "RemoveContainer" containerID="f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.377145 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.379878 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdrhn"] Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.396411 4835 scope.go:117] "RemoveContainer" containerID="b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.402667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities\") pod \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.402787 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snbgp\" (UniqueName: \"kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp\") pod \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.402816 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content\") pod \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\" (UID: \"cf3a1edc-20c5-4348-bdce-8bf4fc613888\") " Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.403534 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities" (OuterVolumeSpecName: "utilities") pod "cf3a1edc-20c5-4348-bdce-8bf4fc613888" (UID: "cf3a1edc-20c5-4348-bdce-8bf4fc613888"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.408121 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp" (OuterVolumeSpecName: "kube-api-access-snbgp") pod "cf3a1edc-20c5-4348-bdce-8bf4fc613888" (UID: "cf3a1edc-20c5-4348-bdce-8bf4fc613888"). InnerVolumeSpecName "kube-api-access-snbgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.411950 4835 scope.go:117] "RemoveContainer" containerID="33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122" Jan 29 15:32:25 crc kubenswrapper[4835]: E0129 15:32:25.412406 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122\": container with ID starting with 33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122 not found: ID does not exist" containerID="33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.412456 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122"} err="failed to get container status \"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122\": rpc error: code = NotFound desc = could not find container \"33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122\": container with ID starting with 33de74bb71c412e7f060e7c2de2bd52341835821ba9e6341c03942e446d5f122 not found: ID does not exist" Jan 29 
15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.412502 4835 scope.go:117] "RemoveContainer" containerID="f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4" Jan 29 15:32:25 crc kubenswrapper[4835]: E0129 15:32:25.412795 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4\": container with ID starting with f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4 not found: ID does not exist" containerID="f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.412829 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4"} err="failed to get container status \"f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4\": rpc error: code = NotFound desc = could not find container \"f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4\": container with ID starting with f03728cf1ec91d09f4e7057549430ba19cc38b34395a85b82a32b2e7c9a170d4 not found: ID does not exist" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.412855 4835 scope.go:117] "RemoveContainer" containerID="b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961" Jan 29 15:32:25 crc kubenswrapper[4835]: E0129 15:32:25.413140 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961\": container with ID starting with b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961 not found: ID does not exist" containerID="b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.413184 4835 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961"} err="failed to get container status \"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961\": rpc error: code = NotFound desc = could not find container \"b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961\": container with ID starting with b03dfdda2cd2b95e1e8feb3734195265b7ccef2ae4954ddb84f7b9072bfeb961 not found: ID does not exist" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.413204 4835 scope.go:117] "RemoveContainer" containerID="caebc792e680b6fa574c841c42204c4ee5bca12783dfe5cd1dc05cb954f7fe68" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.424042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf3a1edc-20c5-4348-bdce-8bf4fc613888" (UID: "cf3a1edc-20c5-4348-bdce-8bf4fc613888"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.445403 4835 scope.go:117] "RemoveContainer" containerID="d039228f9290ef53fd4e52240e754da429748ef081792f59ecec15b80072adaf" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.462122 4835 scope.go:117] "RemoveContainer" containerID="ef5884a2c4173eaffe1efd595beceab84293f0b91a01c1986a1f86e1b72a23e0" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.504822 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.504874 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snbgp\" (UniqueName: \"kubernetes.io/projected/cf3a1edc-20c5-4348-bdce-8bf4fc613888-kube-api-access-snbgp\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.504889 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf3a1edc-20c5-4348-bdce-8bf4fc613888-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.683293 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:32:25 crc kubenswrapper[4835]: I0129 15:32:25.686885 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hsm9c"] Jan 29 15:32:26 crc kubenswrapper[4835]: I0129 15:32:26.500954 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" path="/var/lib/kubelet/pods/c9177be1-3dc4-4827-9ef5-e05afbf025bd/volumes" Jan 29 15:32:26 crc kubenswrapper[4835]: I0129 15:32:26.501648 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" 
path="/var/lib/kubelet/pods/cf3a1edc-20c5-4348-bdce-8bf4fc613888/volumes" Jan 29 15:32:29 crc kubenswrapper[4835]: I0129 15:32:29.737757 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wrghl"] Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.411258 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.411650 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792" gracePeriod=15 Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.411637 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c" gracePeriod=15 Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.411655 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d" gracePeriod=15 Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.411932 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb" gracePeriod=15 Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.412518 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967" gracePeriod=15 Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.413141 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.414988 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.415141 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.415270 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="extract-content" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.415380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="extract-content" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.415491 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="extract-content" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.415561 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="extract-content" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.415634 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="extract-utilities" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.415784 4835 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="extract-utilities" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.415860 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.415922 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.415993 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416087 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.416223 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="extract-utilities" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416303 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="extract-utilities" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.416390 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416481 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.416590 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416663 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.416751 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416833 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.416907 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.416969 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.417029 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417087 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.417140 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417190 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417596 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 15:32:30 crc 
kubenswrapper[4835]: I0129 15:32:30.417683 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417740 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417803 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417857 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3a1edc-20c5-4348-bdce-8bf4fc613888" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.417966 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9177be1-3dc4-4827-9ef5-e05afbf025bd" containerName="registry-server" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.418074 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.418152 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.420256 4835 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.421708 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.425666 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.496849 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.571978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572102 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572124 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 
15:32:30.572188 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572339 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.572544 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 
15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.673918 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.673966 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674033 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 
15:32:30.674084 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674141 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674216 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674258 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674286 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674339 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674368 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674394 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.674421 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: I0129 15:32:30.797968 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:30 crc kubenswrapper[4835]: W0129 15:32:30.839024 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-56d6419d241f31a3f44cb034fb4e7fd6879f32a63d1252d78df57b8f76dd11df WatchSource:0}: Error finding container 56d6419d241f31a3f44cb034fb4e7fd6879f32a63d1252d78df57b8f76dd11df: Status 404 returned error can't find the container with id 56d6419d241f31a3f44cb034fb4e7fd6879f32a63d1252d78df57b8f76dd11df Jan 29 15:32:30 crc kubenswrapper[4835]: E0129 15:32:30.844233 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d74e4bcbe4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:30.842969679 +0000 UTC m=+253.036013373,LastTimestamp:2026-01-29 15:32:30.842969679 +0000 UTC 
m=+253.036013373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.386562 4835 generic.go:334] "Generic (PLEG): container finished" podID="b456e9b8-ec76-4995-a08d-1577772dc71d" containerID="c2fe07f0988da5afe04035b2a535acb528edd9b823a58ca50103211df51a0106" exitCode=0 Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.386675 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b456e9b8-ec76-4995-a08d-1577772dc71d","Type":"ContainerDied","Data":"c2fe07f0988da5afe04035b2a535acb528edd9b823a58ca50103211df51a0106"} Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.387639 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.390166 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc"} Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.390245 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"56d6419d241f31a3f44cb034fb4e7fd6879f32a63d1252d78df57b8f76dd11df"} Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.391599 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:31 crc kubenswrapper[4835]: E0129 15:32:31.391690 4835 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.393407 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.395259 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.395937 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d" exitCode=0 Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.395963 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792" exitCode=0 Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.395973 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967" exitCode=0 Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.395983 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c" exitCode=2 
Jan 29 15:32:31 crc kubenswrapper[4835]: I0129 15:32:31.396032 4835 scope.go:117] "RemoveContainer" containerID="e4b9faddea5f94d40747ec9f6d3185d98ea666342dd84dc746881c3ea9b9dbf0" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.421051 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.788398 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.789364 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.789827 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.789993 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.807140 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.807657 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.807928 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.903816 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock\") pod \"b456e9b8-ec76-4995-a08d-1577772dc71d\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904009 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access\") pod \"b456e9b8-ec76-4995-a08d-1577772dc71d\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904032 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock" (OuterVolumeSpecName: "var-lock") pod "b456e9b8-ec76-4995-a08d-1577772dc71d" (UID: "b456e9b8-ec76-4995-a08d-1577772dc71d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904068 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir\") pod \"b456e9b8-ec76-4995-a08d-1577772dc71d\" (UID: \"b456e9b8-ec76-4995-a08d-1577772dc71d\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904109 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904155 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904190 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b456e9b8-ec76-4995-a08d-1577772dc71d" (UID: "b456e9b8-ec76-4995-a08d-1577772dc71d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904246 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904264 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904406 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904524 4835 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904545 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904562 4835 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b456e9b8-ec76-4995-a08d-1577772dc71d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.904578 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:32 crc kubenswrapper[4835]: I0129 15:32:32.913516 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b456e9b8-ec76-4995-a08d-1577772dc71d" (UID: "b456e9b8-ec76-4995-a08d-1577772dc71d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.006147 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b456e9b8-ec76-4995-a08d-1577772dc71d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.006627 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.435053 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.436484 4835 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb" exitCode=0 Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.436599 4835 scope.go:117] "RemoveContainer" containerID="f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.436789 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.440736 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b456e9b8-ec76-4995-a08d-1577772dc71d","Type":"ContainerDied","Data":"7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86"} Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.440903 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7290b31de26aed9fbb3cbff22ac1372b54c78c55aebd32f92cafe27c4d6cac86" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.440816 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.461705 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.462063 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.468178 4835 scope.go:117] "RemoveContainer" containerID="974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.473728 4835 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.474004 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.490396 4835 scope.go:117] "RemoveContainer" containerID="c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.516381 4835 scope.go:117] "RemoveContainer" containerID="5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.539078 4835 scope.go:117] "RemoveContainer" containerID="22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.566050 4835 scope.go:117] "RemoveContainer" containerID="51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.607341 4835 scope.go:117] "RemoveContainer" containerID="f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 15:32:33.608262 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\": container with ID starting with f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d not found: ID does not exist" containerID="f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.608441 4835 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d"} err="failed to get container status \"f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\": rpc error: code = NotFound desc = could not find container \"f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d\": container with ID starting with f36103d3ad026d117fcc9b90432f43cd91fd979b350a1af189e0e4eb328dec7d not found: ID does not exist" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.608620 4835 scope.go:117] "RemoveContainer" containerID="974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 15:32:33.609418 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\": container with ID starting with 974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792 not found: ID does not exist" containerID="974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.609491 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792"} err="failed to get container status \"974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\": rpc error: code = NotFound desc = could not find container \"974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792\": container with ID starting with 974dba235ff567089025e21f48df28dea6be694e971f07f00281e61964f8d792 not found: ID does not exist" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.609536 4835 scope.go:117] "RemoveContainer" containerID="c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 
15:32:33.609870 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\": container with ID starting with c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967 not found: ID does not exist" containerID="c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.609906 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967"} err="failed to get container status \"c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\": rpc error: code = NotFound desc = could not find container \"c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967\": container with ID starting with c5c444f08c1af97f6de8e63f77daf0ae6ed20308c71acf3b037460805e1e1967 not found: ID does not exist" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.609927 4835 scope.go:117] "RemoveContainer" containerID="5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 15:32:33.610220 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\": container with ID starting with 5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c not found: ID does not exist" containerID="5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.610254 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c"} err="failed to get container status \"5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\": rpc 
error: code = NotFound desc = could not find container \"5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c\": container with ID starting with 5dc0f871a5c0b53fff8ac05299cbb6f73e58eba0849ffff1e70fe4fe4e87101c not found: ID does not exist" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.610280 4835 scope.go:117] "RemoveContainer" containerID="22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 15:32:33.612195 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\": container with ID starting with 22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb not found: ID does not exist" containerID="22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.612275 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb"} err="failed to get container status \"22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\": rpc error: code = NotFound desc = could not find container \"22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb\": container with ID starting with 22e83322e8fda9fa399ebcf2290d4efcf065f0eb51e553377df5bc403c3818fb not found: ID does not exist" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.612364 4835 scope.go:117] "RemoveContainer" containerID="51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b" Jan 29 15:32:33 crc kubenswrapper[4835]: E0129 15:32:33.612853 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\": container with ID starting with 
51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b not found: ID does not exist" containerID="51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b" Jan 29 15:32:33 crc kubenswrapper[4835]: I0129 15:32:33.612893 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b"} err="failed to get container status \"51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\": rpc error: code = NotFound desc = could not find container \"51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b\": container with ID starting with 51f448b3ce630b88a2b76bbb7c17deccce4eb2304a3ecbdef420430b7b3c525b not found: ID does not exist" Jan 29 15:32:34 crc kubenswrapper[4835]: I0129 15:32:34.505304 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 15:32:38 crc kubenswrapper[4835]: I0129 15:32:38.496207 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.412610 4835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.144:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f3d74e4bcbe4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 15:32:30.842969679 +0000 UTC m=+253.036013373,LastTimestamp:2026-01-29 15:32:30.842969679 +0000 UTC m=+253.036013373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.680137 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.680814 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.681253 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.681693 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 
15:32:39.682297 4835 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:39 crc kubenswrapper[4835]: I0129 15:32:39.682361 4835 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.682760 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="200ms" Jan 29 15:32:39 crc kubenswrapper[4835]: E0129 15:32:39.883600 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="400ms" Jan 29 15:32:40 crc kubenswrapper[4835]: E0129 15:32:40.284727 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="800ms" Jan 29 15:32:41 crc kubenswrapper[4835]: E0129 15:32:41.085424 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="1.6s" Jan 29 15:32:42 crc kubenswrapper[4835]: E0129 15:32:42.686625 4835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.144:6443: connect: connection refused" interval="3.2s" Jan 29 15:32:43 crc kubenswrapper[4835]: I0129 15:32:43.492005 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:43 crc kubenswrapper[4835]: I0129 15:32:43.493349 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:43 crc kubenswrapper[4835]: I0129 15:32:43.520934 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:43 crc kubenswrapper[4835]: I0129 15:32:43.520980 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:43 crc kubenswrapper[4835]: E0129 15:32:43.521646 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:43 crc kubenswrapper[4835]: I0129 15:32:43.522328 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.516882 4835 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="cc0f03916e4998c1e348243b99a5ec3ebbd0129a4f041057db19d66ea228b1b7" exitCode=0 Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.516958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"cc0f03916e4998c1e348243b99a5ec3ebbd0129a4f041057db19d66ea228b1b7"} Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.517007 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"741df6883008a018f0348b81ea449bfa0eb79b89a33105e5ba3309cea82f1dd2"} Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.517410 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.517429 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:44 crc kubenswrapper[4835]: I0129 15:32:44.518215 4835 status_manager.go:851] "Failed to get status for pod" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.144:6443: connect: connection refused" Jan 29 15:32:44 crc kubenswrapper[4835]: E0129 15:32:44.518247 4835 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.102.83.144:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.528922 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1c03e5cb95e2e9c47a4eaf701e43289b8f0edf64fd6e87ea0f875d3293fb247"} Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.529244 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6efca3b32118cff7b67e4d46cb4acf1ac4052876db0eaa8472e73d2ce8d19347"} Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.529257 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c094e239495b718864cd290f2b30bf6d46c5537aee9444667c4ab7dfb98b8a88"} Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.529273 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"11c4b91603f694f6fdbf54cf4affa4ed7f8f12a834774a6b3c072de6bd34e1ff"} Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.531303 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.531344 4835 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c" exitCode=1 Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.531364 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c"} Jan 29 15:32:45 crc kubenswrapper[4835]: I0129 15:32:45.531818 4835 scope.go:117] "RemoveContainer" containerID="70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c" Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.539626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b0d303b07e9bd26a448f64a8cbb52f9bedff093f31dc2ac4c4a2422e30cc221"} Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.540036 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.539930 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.540069 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.542743 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:32:46 crc kubenswrapper[4835]: I0129 15:32:46.542780 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab4b0691cd20b735e3322001cfa7d90918c4a86f76ed3bd4ab3f1f61d51041a7"} Jan 29 15:32:47 crc kubenswrapper[4835]: I0129 15:32:47.593362 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:47 crc kubenswrapper[4835]: I0129 15:32:47.593684 4835 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 15:32:47 crc kubenswrapper[4835]: I0129 15:32:47.594223 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 15:32:48 crc kubenswrapper[4835]: I0129 15:32:48.522543 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:48 crc kubenswrapper[4835]: I0129 15:32:48.522610 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:48 crc kubenswrapper[4835]: I0129 15:32:48.530048 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:50 crc kubenswrapper[4835]: I0129 15:32:50.217427 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:32:51 crc kubenswrapper[4835]: I0129 15:32:51.553369 4835 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:51 crc kubenswrapper[4835]: I0129 15:32:51.577638 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:51 crc kubenswrapper[4835]: I0129 15:32:51.577681 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:51 crc kubenswrapper[4835]: I0129 15:32:51.583965 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:32:51 crc kubenswrapper[4835]: I0129 15:32:51.588228 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10b6589b-5118-45e5-949b-d43ada0aca5e" Jan 29 15:32:52 crc kubenswrapper[4835]: I0129 15:32:52.588496 4835 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:52 crc kubenswrapper[4835]: I0129 15:32:52.588538 4835 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e865a0c1-3c6b-42b8-bf83-fd54eaffd425" Jan 29 15:32:54 crc kubenswrapper[4835]: I0129 15:32:54.769631 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerName="oauth-openshift" containerID="cri-o://0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc" gracePeriod=15 Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.202868 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.374787 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.375317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.375668 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.375948 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.376208 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: 
\"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.376503 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.376588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.376962 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.377223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plv6v\" (UniqueName: \"kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.377379 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod 
"b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.377694 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.377938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.378249 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.378331 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.378758 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.379020 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.379253 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies\") pod \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\" (UID: \"b51dbfee-ce3e-4c24-946c-f7481c5c286a\") " Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.379639 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.379934 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.380619 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.380664 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.380687 4835 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.380709 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.380727 4835 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b51dbfee-ce3e-4c24-946c-f7481c5c286a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.381812 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v" (OuterVolumeSpecName: "kube-api-access-plv6v") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "kube-api-access-plv6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.381882 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.381933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.382861 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.383348 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.383433 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.383701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.383931 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.384219 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b51dbfee-ce3e-4c24-946c-f7481c5c286a" (UID: "b51dbfee-ce3e-4c24-946c-f7481c5c286a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481635 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481690 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481708 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481725 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481743 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481759 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plv6v\" (UniqueName: \"kubernetes.io/projected/b51dbfee-ce3e-4c24-946c-f7481c5c286a-kube-api-access-plv6v\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481775 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481791 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.481802 4835 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b51dbfee-ce3e-4c24-946c-f7481c5c286a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.608698 4835 generic.go:334] "Generic (PLEG): container finished" podID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerID="0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc" exitCode=0 Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.608750 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" event={"ID":"b51dbfee-ce3e-4c24-946c-f7481c5c286a","Type":"ContainerDied","Data":"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc"} Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.608774 4835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.608785 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wrghl" event={"ID":"b51dbfee-ce3e-4c24-946c-f7481c5c286a","Type":"ContainerDied","Data":"d647ce725810a3ac0f9196e551d06aa882b3f200179caffc213462edb5043c74"} Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.608807 4835 scope.go:117] "RemoveContainer" containerID="0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.643227 4835 scope.go:117] "RemoveContainer" containerID="0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc" Jan 29 15:32:55 crc kubenswrapper[4835]: E0129 15:32:55.644200 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc\": container with ID starting with 0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc not found: ID does not exist" containerID="0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc" Jan 29 15:32:55 crc kubenswrapper[4835]: I0129 15:32:55.644346 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc"} err="failed to get container status \"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc\": rpc error: code = NotFound desc = could not find container \"0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc\": container with ID starting with 0a1cddb15df793e9f2dca35c33e82b4aff0b3f120416a0c0ac753298d74d02dc not found: ID does not exist" Jan 29 15:32:57 crc kubenswrapper[4835]: I0129 15:32:57.593836 4835 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 15:32:57 crc kubenswrapper[4835]: I0129 15:32:57.594192 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 15:32:58 crc kubenswrapper[4835]: I0129 15:32:58.509974 4835 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="10b6589b-5118-45e5-949b-d43ada0aca5e" Jan 29 15:33:00 crc kubenswrapper[4835]: I0129 15:33:00.688547 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 15:33:01 crc kubenswrapper[4835]: I0129 15:33:01.028096 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 29 15:33:01 crc kubenswrapper[4835]: I0129 15:33:01.675458 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 29 15:33:01 crc kubenswrapper[4835]: I0129 15:33:01.838170 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.020485 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 29 15:33:03 crc 
kubenswrapper[4835]: I0129 15:33:03.305879 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.316712 4835 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.329236 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.377648 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.474449 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.536971 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.764267 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 15:33:03 crc kubenswrapper[4835]: I0129 15:33:03.998069 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.234944 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.248974 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.309832 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 
15:33:04.480229 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.612975 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.623574 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.633031 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.641890 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.749261 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.813666 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 15:33:04 crc kubenswrapper[4835]: I0129 15:33:04.869859 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.000183 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.082212 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.154683 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.175893 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.263833 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.289368 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.342863 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.354197 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.454018 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.496502 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.513299 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.538077 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.566129 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.647843 4835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.834181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.836525 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.866644 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.913605 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 15:33:05 crc kubenswrapper[4835]: I0129 15:33:05.942870 4835 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.032335 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.161641 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.271798 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.361381 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.622403 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 15:33:06 crc kubenswrapper[4835]: 
I0129 15:33:06.631378 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.633665 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.786411 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.883963 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.885058 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.975330 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.977298 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 15:33:06 crc kubenswrapper[4835]: I0129 15:33:06.998382 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.055945 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.092158 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.124681 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 15:33:07 crc 
kubenswrapper[4835]: I0129 15:33:07.130564 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.134425 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.175736 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.176331 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.185496 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.195730 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.228525 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.238003 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.455723 4835 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.475377 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.498995 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.538277 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.594095 4835 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.594146 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.594202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.594786 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"ab4b0691cd20b735e3322001cfa7d90918c4a86f76ed3bd4ab3f1f61d51041a7"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.594918 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://ab4b0691cd20b735e3322001cfa7d90918c4a86f76ed3bd4ab3f1f61d51041a7" gracePeriod=30 Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.691660 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.744642 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 15:33:07 crc kubenswrapper[4835]: I0129 15:33:07.921517 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.056769 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.240590 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.324082 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.347611 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.360410 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.410758 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.480582 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.494692 4835 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.622408 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.758491 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.777271 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.907312 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.942014 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.964968 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.990683 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 29 15:33:08 crc kubenswrapper[4835]: I0129 15:33:08.998036 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.046839 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.188008 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.249837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.390626 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.425946 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.473947 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.529167 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.563832 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.575750 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.592886 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.640853 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.655640 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 
15:33:09.664566 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.704683 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.725010 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.741757 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.759302 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.882217 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.929336 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.940915 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.961308 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:33:09 crc kubenswrapper[4835]: I0129 15:33:09.967918 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.002100 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.259617 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.265118 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.322517 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.442228 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.460946 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.462192 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.464382 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.479962 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.568869 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.747719 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.815044 4835 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 29 15:33:10 crc kubenswrapper[4835]: I0129 15:33:10.971010 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.037410 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.121055 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.165112 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.194259 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.501091 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.589943 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.618646 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.680951 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.689397 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.725710 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.845650 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 15:33:11 crc kubenswrapper[4835]: I0129 15:33:11.934005 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.050671 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.097565 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.148138 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.180501 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.263416 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.268772 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.319124 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.336868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.434609 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.485539 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.525923 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.597285 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.615636 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.619045 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.629176 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.685854 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.700939 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.768517 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.857925 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.858054 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.859828 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 15:33:12 crc kubenswrapper[4835]: I0129 15:33:12.932030 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.085694 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.135410 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.173780 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.293059 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.293150 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.294818 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.328558 4835 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.341350 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.342988 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.376217 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.463679 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.465311 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.515673 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.516593 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.586855 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.668711 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.779525 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 15:33:13 crc 
kubenswrapper[4835]: I0129 15:33:13.803189 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.845033 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.846774 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.912222 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.934338 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.960286 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 15:33:13 crc kubenswrapper[4835]: I0129 15:33:13.981431 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.017224 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.030647 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.103321 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.120111 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.175513 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.304376 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.511249 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.590886 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.636449 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.637616 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.645868 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.647149 4835 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.710039 4835 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.711107 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 15:33:14 crc 
kubenswrapper[4835]: I0129 15:33:14.735913 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.797698 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.814622 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.834279 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.835866 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.876074 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.946481 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 15:33:14 crc kubenswrapper[4835]: I0129 15:33:14.980619 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.012614 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.027263 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.051909 4835 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.103956 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.198624 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.237542 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.340303 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.429523 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.430092 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.493722 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.546718 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.699331 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.723792 4835 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.727838 4835 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-wrghl"] Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.727890 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.731756 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.751547 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.751530091 podStartE2EDuration="24.751530091s" podCreationTimestamp="2026-01-29 15:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:15.748355201 +0000 UTC m=+297.941398855" watchObservedRunningTime="2026-01-29 15:33:15.751530091 +0000 UTC m=+297.944573745" Jan 29 15:33:15 crc kubenswrapper[4835]: I0129 15:33:15.976928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.183856 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.254552 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.266524 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.346031 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 15:33:16 
crc kubenswrapper[4835]: I0129 15:33:16.418948 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.434320 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.498851 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" path="/var/lib/kubelet/pods/b51dbfee-ce3e-4c24-946c-f7481c5c286a/volumes" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.554980 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-65984cfdd4-ms8wb"] Jan 29 15:33:16 crc kubenswrapper[4835]: E0129 15:33:16.555178 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" containerName="installer" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.555189 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" containerName="installer" Jan 29 15:33:16 crc kubenswrapper[4835]: E0129 15:33:16.555200 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerName="oauth-openshift" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.555206 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerName="oauth-openshift" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.555293 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b456e9b8-ec76-4995-a08d-1577772dc71d" containerName="installer" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.555305 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51dbfee-ce3e-4c24-946c-f7481c5c286a" containerName="oauth-openshift" Jan 29 15:33:16 crc 
kubenswrapper[4835]: I0129 15:33:16.555715 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.559900 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.559901 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560294 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.559920 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560053 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560061 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560539 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560200 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560201 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.560798 4835 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.561123 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.563320 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.568160 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.573915 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65984cfdd4-ms8wb"] Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.578501 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.582451 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.644222 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.644541 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-dir\") pod 
\"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.644701 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-login\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.644808 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.644936 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645044 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " 
pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645149 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645248 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2gx\" (UniqueName: \"kubernetes.io/projected/49394808-5fc7-4ef4-b943-3b0af36f0e8c-kube-api-access-wr2gx\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645475 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-policies\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645580 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-error\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645809 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-session\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.645919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746424 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-login\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746492 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746525 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746545 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746564 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " 
pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr2gx\" (UniqueName: \"kubernetes.io/projected/49394808-5fc7-4ef4-b943-3b0af36f0e8c-kube-api-access-wr2gx\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746596 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746653 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-policies\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-error\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746711 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-session\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746730 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746757 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-dir\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " 
pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.746826 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-dir\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.749897 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.749966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.751588 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.752859 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49394808-5fc7-4ef4-b943-3b0af36f0e8c-audit-policies\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.755144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-session\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.756174 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.757378 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-login\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.757571 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 
crc kubenswrapper[4835]: I0129 15:33:16.761944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.762520 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.762425 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-error\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.764189 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49394808-5fc7-4ef4-b943-3b0af36f0e8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.768881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr2gx\" (UniqueName: 
\"kubernetes.io/projected/49394808-5fc7-4ef4-b943-3b0af36f0e8c-kube-api-access-wr2gx\") pod \"oauth-openshift-65984cfdd4-ms8wb\" (UID: \"49394808-5fc7-4ef4-b943-3b0af36f0e8c\") " pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.835537 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.860443 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.882039 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.888326 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:16 crc kubenswrapper[4835]: I0129 15:33:16.939181 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.085622 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.121049 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.153393 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.232291 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.241903 4835 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.278818 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-65984cfdd4-ms8wb"] Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.739044 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" event={"ID":"49394808-5fc7-4ef4-b943-3b0af36f0e8c","Type":"ContainerStarted","Data":"cc67224c02b2978c6d031a66bf20716cd4a37850b8266f0142ba2aee10881f4c"} Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.739096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" event={"ID":"49394808-5fc7-4ef4-b943-3b0af36f0e8c","Type":"ContainerStarted","Data":"b6e8272b4312f66fcac4cc49bdf59836551b06e13a7ee5e754072d3cb38124a0"} Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.739348 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.824137 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.866488 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" Jan 29 15:33:17 crc kubenswrapper[4835]: I0129 15:33:17.884289 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-65984cfdd4-ms8wb" podStartSLOduration=48.884272317 podStartE2EDuration="48.884272317s" podCreationTimestamp="2026-01-29 15:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
15:33:17.763773039 +0000 UTC m=+299.956816693" watchObservedRunningTime="2026-01-29 15:33:17.884272317 +0000 UTC m=+300.077315971" Jan 29 15:33:18 crc kubenswrapper[4835]: I0129 15:33:18.301614 4835 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 15:33:19 crc kubenswrapper[4835]: I0129 15:33:19.244716 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:33:19 crc kubenswrapper[4835]: I0129 15:33:19.394011 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 15:33:25 crc kubenswrapper[4835]: I0129 15:33:25.299815 4835 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 15:33:25 crc kubenswrapper[4835]: I0129 15:33:25.300592 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc" gracePeriod=5 Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.667143 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.667628 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" podUID="de357655-3542-47bc-b940-695460f40cf0" containerName="route-controller-manager" containerID="cri-o://3a1626ad1f5365df70bd54a9fe17b25bcd4a6a8c291479ea1f9aaa6b5750397b" gracePeriod=30 Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.673682 4835 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.673996 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" podUID="72c87103-4a0f-451f-9679-0c5256c5399f" containerName="controller-manager" containerID="cri-o://bc87d81c5929101202d0f16b353268ce03582c9625991e060fe5493404a86077" gracePeriod=30 Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.814325 4835 generic.go:334] "Generic (PLEG): container finished" podID="72c87103-4a0f-451f-9679-0c5256c5399f" containerID="bc87d81c5929101202d0f16b353268ce03582c9625991e060fe5493404a86077" exitCode=0 Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.814425 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" event={"ID":"72c87103-4a0f-451f-9679-0c5256c5399f","Type":"ContainerDied","Data":"bc87d81c5929101202d0f16b353268ce03582c9625991e060fe5493404a86077"} Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.815749 4835 generic.go:334] "Generic (PLEG): container finished" podID="de357655-3542-47bc-b940-695460f40cf0" containerID="3a1626ad1f5365df70bd54a9fe17b25bcd4a6a8c291479ea1f9aaa6b5750397b" exitCode=0 Jan 29 15:33:27 crc kubenswrapper[4835]: I0129 15:33:27.815781 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" event={"ID":"de357655-3542-47bc-b940-695460f40cf0","Type":"ContainerDied","Data":"3a1626ad1f5365df70bd54a9fe17b25bcd4a6a8c291479ea1f9aaa6b5750397b"} Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.177263 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.240239 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.301820 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config\") pod \"de357655-3542-47bc-b940-695460f40cf0\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvcx6\" (UniqueName: \"kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6\") pod \"de357655-3542-47bc-b940-695460f40cf0\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302686 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles\") pod \"72c87103-4a0f-451f-9679-0c5256c5399f\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302713 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca\") pod \"de357655-3542-47bc-b940-695460f40cf0\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302737 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99t4c\" (UniqueName: 
\"kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c\") pod \"72c87103-4a0f-451f-9679-0c5256c5399f\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302761 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert\") pod \"de357655-3542-47bc-b940-695460f40cf0\" (UID: \"de357655-3542-47bc-b940-695460f40cf0\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302792 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca\") pod \"72c87103-4a0f-451f-9679-0c5256c5399f\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302851 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config\") pod \"72c87103-4a0f-451f-9679-0c5256c5399f\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302882 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert\") pod \"72c87103-4a0f-451f-9679-0c5256c5399f\" (UID: \"72c87103-4a0f-451f-9679-0c5256c5399f\") " Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.302906 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config" (OuterVolumeSpecName: "config") pod "de357655-3542-47bc-b940-695460f40cf0" (UID: "de357655-3542-47bc-b940-695460f40cf0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.303252 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.303679 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "72c87103-4a0f-451f-9679-0c5256c5399f" (UID: "72c87103-4a0f-451f-9679-0c5256c5399f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.303727 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca" (OuterVolumeSpecName: "client-ca") pod "72c87103-4a0f-451f-9679-0c5256c5399f" (UID: "72c87103-4a0f-451f-9679-0c5256c5399f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.304019 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca" (OuterVolumeSpecName: "client-ca") pod "de357655-3542-47bc-b940-695460f40cf0" (UID: "de357655-3542-47bc-b940-695460f40cf0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.304331 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config" (OuterVolumeSpecName: "config") pod "72c87103-4a0f-451f-9679-0c5256c5399f" (UID: "72c87103-4a0f-451f-9679-0c5256c5399f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.309194 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "72c87103-4a0f-451f-9679-0c5256c5399f" (UID: "72c87103-4a0f-451f-9679-0c5256c5399f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.309832 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6" (OuterVolumeSpecName: "kube-api-access-nvcx6") pod "de357655-3542-47bc-b940-695460f40cf0" (UID: "de357655-3542-47bc-b940-695460f40cf0"). InnerVolumeSpecName "kube-api-access-nvcx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.310356 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de357655-3542-47bc-b940-695460f40cf0" (UID: "de357655-3542-47bc-b940-695460f40cf0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.310447 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c" (OuterVolumeSpecName: "kube-api-access-99t4c") pod "72c87103-4a0f-451f-9679-0c5256c5399f" (UID: "72c87103-4a0f-451f-9679-0c5256c5399f"). InnerVolumeSpecName "kube-api-access-99t4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403717 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de357655-3542-47bc-b940-695460f40cf0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403764 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403776 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403787 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c87103-4a0f-451f-9679-0c5256c5399f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403800 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvcx6\" (UniqueName: \"kubernetes.io/projected/de357655-3542-47bc-b940-695460f40cf0-kube-api-access-nvcx6\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403814 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/72c87103-4a0f-451f-9679-0c5256c5399f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403826 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de357655-3542-47bc-b940-695460f40cf0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.403837 4835 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-99t4c\" (UniqueName: \"kubernetes.io/projected/72c87103-4a0f-451f-9679-0c5256c5399f-kube-api-access-99t4c\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.822957 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.823610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-776867bf5d-rww9q" event={"ID":"72c87103-4a0f-451f-9679-0c5256c5399f","Type":"ContainerDied","Data":"feb2ceaf04417d7b84ec535a90d7fa6431e15465c7071741c1575365d17e6838"} Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.823649 4835 scope.go:117] "RemoveContainer" containerID="bc87d81c5929101202d0f16b353268ce03582c9625991e060fe5493404a86077" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.827266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" event={"ID":"de357655-3542-47bc-b940-695460f40cf0","Type":"ContainerDied","Data":"abff4dfc5204704406c0c9d03c4dfbc9580f4f937888d6fe2f9c9cf41eb9bd9f"} Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.827314 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.847064 4835 scope.go:117] "RemoveContainer" containerID="3a1626ad1f5365df70bd54a9fe17b25bcd4a6a8c291479ea1f9aaa6b5750397b" Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.850111 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.857037 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6bffb87749-24xsm"] Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.861426 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:33:28 crc kubenswrapper[4835]: I0129 15:33:28.864995 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-776867bf5d-rww9q"] Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.415823 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:29 crc kubenswrapper[4835]: E0129 15:33:29.416099 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416113 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:29 crc kubenswrapper[4835]: E0129 15:33:29.416127 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72c87103-4a0f-451f-9679-0c5256c5399f" containerName="controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416134 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72c87103-4a0f-451f-9679-0c5256c5399f" containerName="controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: E0129 15:33:29.416142 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de357655-3542-47bc-b940-695460f40cf0" containerName="route-controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416147 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="de357655-3542-47bc-b940-695460f40cf0" containerName="route-controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416248 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416265 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="de357655-3542-47bc-b940-695460f40cf0" containerName="route-controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416278 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="72c87103-4a0f-451f-9679-0c5256c5399f" containerName="controller-manager" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.416739 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.419160 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.419439 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.419454 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.419532 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.419454 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.420226 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.421956 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.422121 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.426772 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.426807 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.427149 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.427248 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.427432 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.427597 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.428003 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.433525 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 
15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.438322 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515049 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk8tk\" (UniqueName: \"kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515110 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515139 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515172 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc 
kubenswrapper[4835]: I0129 15:33:29.515209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515229 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515284 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-schcr\" (UniqueName: \"kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.515424 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616700 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616749 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616790 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-schcr\" (UniqueName: \"kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616809 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616878 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk8tk\" (UniqueName: \"kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616897 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.616914 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-vfncv\" (UID: 
\"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.618318 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.619163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.619581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.620210 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.621150 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config\") pod 
\"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.623650 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.624127 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.645176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk8tk\" (UniqueName: \"kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk\") pod \"controller-manager-79cc795f77-vfncv\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.645262 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-schcr\" (UniqueName: \"kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr\") pod \"route-controller-manager-6d75c4448c-qvwzz\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.736566 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.746706 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:29 crc kubenswrapper[4835]: I0129 15:33:29.948433 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.197455 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:30 crc kubenswrapper[4835]: W0129 15:33:30.204586 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf30371f2_28b9_4f01_b26d_d04927a44679.slice/crio-2a8f57b32ef69cb5c48ee6ca3d461a2b3e096394b1ab5128f199c349f41497d7 WatchSource:0}: Error finding container 2a8f57b32ef69cb5c48ee6ca3d461a2b3e096394b1ab5128f199c349f41497d7: Status 404 returned error can't find the container with id 2a8f57b32ef69cb5c48ee6ca3d461a2b3e096394b1ab5128f199c349f41497d7 Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.393125 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.393199 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.498098 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72c87103-4a0f-451f-9679-0c5256c5399f" path="/var/lib/kubelet/pods/72c87103-4a0f-451f-9679-0c5256c5399f/volumes" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.499170 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de357655-3542-47bc-b940-695460f40cf0" path="/var/lib/kubelet/pods/de357655-3542-47bc-b940-695460f40cf0/volumes" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.526662 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.526722 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.526758 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.526791 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.526876 
4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.527185 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.527639 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.527703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.527724 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.534376 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.628810 4835 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.628846 4835 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.628859 4835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.628870 4835 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.628880 4835 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.865747 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" 
event={"ID":"b96a099e-2226-4031-af7f-f912a3d47075","Type":"ContainerStarted","Data":"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e"} Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.865833 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.865854 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" event={"ID":"b96a099e-2226-4031-af7f-f912a3d47075","Type":"ContainerStarted","Data":"b9087e410ee05036abc83d25495d104a47ecdc1762451495859b08e182ddce94"} Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.867505 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.867545 4835 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc" exitCode=137 Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.867602 4835 scope.go:117] "RemoveContainer" containerID="0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.867677 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.869919 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" event={"ID":"f30371f2-28b9-4f01-b26d-d04927a44679","Type":"ContainerStarted","Data":"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039"} Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.869951 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" event={"ID":"f30371f2-28b9-4f01-b26d-d04927a44679","Type":"ContainerStarted","Data":"2a8f57b32ef69cb5c48ee6ca3d461a2b3e096394b1ab5128f199c349f41497d7"} Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.870732 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.872928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.876728 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.902891 4835 scope.go:117] "RemoveContainer" containerID="0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc" Jan 29 15:33:30 crc kubenswrapper[4835]: E0129 15:33:30.903402 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc\": container with ID starting with 0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc not found: ID does not exist" 
containerID="0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.903550 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc"} err="failed to get container status \"0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc\": rpc error: code = NotFound desc = could not find container \"0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc\": container with ID starting with 0454f6bb9d77257444581ca15a68056dd9c6d614d7011c3d6c4ddabf54856cbc not found: ID does not exist" Jan 29 15:33:30 crc kubenswrapper[4835]: I0129 15:33:30.913445 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" podStartSLOduration=3.913424692 podStartE2EDuration="3.913424692s" podCreationTimestamp="2026-01-29 15:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:30.886304509 +0000 UTC m=+313.079348163" watchObservedRunningTime="2026-01-29 15:33:30.913424692 +0000 UTC m=+313.106468346" Jan 29 15:33:32 crc kubenswrapper[4835]: I0129 15:33:32.500677 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 29 15:33:33 crc kubenswrapper[4835]: I0129 15:33:33.073955 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" podStartSLOduration=6.073937237 podStartE2EDuration="6.073937237s" podCreationTimestamp="2026-01-29 15:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
15:33:30.936773571 +0000 UTC m=+313.129817225" watchObservedRunningTime="2026-01-29 15:33:33.073937237 +0000 UTC m=+315.266980891" Jan 29 15:33:33 crc kubenswrapper[4835]: I0129 15:33:33.076179 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:33 crc kubenswrapper[4835]: I0129 15:33:33.094704 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:33 crc kubenswrapper[4835]: I0129 15:33:33.887510 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" podUID="b96a099e-2226-4031-af7f-f912a3d47075" containerName="route-controller-manager" containerID="cri-o://30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e" gracePeriod=30 Jan 29 15:33:33 crc kubenswrapper[4835]: I0129 15:33:33.887535 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" podUID="f30371f2-28b9-4f01-b26d-d04927a44679" containerName="controller-manager" containerID="cri-o://3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039" gracePeriod=30 Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.361268 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.389138 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:33:34 crc kubenswrapper[4835]: E0129 15:33:34.389361 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96a099e-2226-4031-af7f-f912a3d47075" containerName="route-controller-manager" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.389375 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96a099e-2226-4031-af7f-f912a3d47075" containerName="route-controller-manager" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.389526 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96a099e-2226-4031-af7f-f912a3d47075" containerName="route-controller-manager" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.389894 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.408457 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.426636 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.477814 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca\") pod \"b96a099e-2226-4031-af7f-f912a3d47075\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.477884 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config\") pod \"b96a099e-2226-4031-af7f-f912a3d47075\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.477926 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert\") pod \"b96a099e-2226-4031-af7f-f912a3d47075\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.478552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca" (OuterVolumeSpecName: "client-ca") pod "b96a099e-2226-4031-af7f-f912a3d47075" (UID: "b96a099e-2226-4031-af7f-f912a3d47075"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.478609 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config" (OuterVolumeSpecName: "config") pod "b96a099e-2226-4031-af7f-f912a3d47075" (UID: "b96a099e-2226-4031-af7f-f912a3d47075"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.478935 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-schcr\" (UniqueName: \"kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr\") pod \"b96a099e-2226-4031-af7f-f912a3d47075\" (UID: \"b96a099e-2226-4031-af7f-f912a3d47075\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.479154 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.479174 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96a099e-2226-4031-af7f-f912a3d47075-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.483297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr" (OuterVolumeSpecName: "kube-api-access-schcr") pod "b96a099e-2226-4031-af7f-f912a3d47075" (UID: "b96a099e-2226-4031-af7f-f912a3d47075"). InnerVolumeSpecName "kube-api-access-schcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.484552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b96a099e-2226-4031-af7f-f912a3d47075" (UID: "b96a099e-2226-4031-af7f-f912a3d47075"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579604 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles\") pod \"f30371f2-28b9-4f01-b26d-d04927a44679\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579679 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert\") pod \"f30371f2-28b9-4f01-b26d-d04927a44679\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579728 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca\") pod \"f30371f2-28b9-4f01-b26d-d04927a44679\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579776 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config\") pod \"f30371f2-28b9-4f01-b26d-d04927a44679\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579828 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk8tk\" (UniqueName: \"kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk\") pod \"f30371f2-28b9-4f01-b26d-d04927a44679\" (UID: \"f30371f2-28b9-4f01-b26d-d04927a44679\") " Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579954 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.579976 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrpq\" (UniqueName: \"kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580054 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580117 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-schcr\" (UniqueName: \"kubernetes.io/projected/b96a099e-2226-4031-af7f-f912a3d47075-kube-api-access-schcr\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580129 4835 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96a099e-2226-4031-af7f-f912a3d47075-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580814 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca" (OuterVolumeSpecName: "client-ca") pod "f30371f2-28b9-4f01-b26d-d04927a44679" (UID: "f30371f2-28b9-4f01-b26d-d04927a44679"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.580984 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config" (OuterVolumeSpecName: "config") pod "f30371f2-28b9-4f01-b26d-d04927a44679" (UID: "f30371f2-28b9-4f01-b26d-d04927a44679"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.581049 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f30371f2-28b9-4f01-b26d-d04927a44679" (UID: "f30371f2-28b9-4f01-b26d-d04927a44679"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.584011 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f30371f2-28b9-4f01-b26d-d04927a44679" (UID: "f30371f2-28b9-4f01-b26d-d04927a44679"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.584291 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk" (OuterVolumeSpecName: "kube-api-access-xk8tk") pod "f30371f2-28b9-4f01-b26d-d04927a44679" (UID: "f30371f2-28b9-4f01-b26d-d04927a44679"). InnerVolumeSpecName "kube-api-access-xk8tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681324 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681494 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681522 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681553 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrpq\" (UniqueName: 
\"kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681725 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681744 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681758 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk8tk\" (UniqueName: \"kubernetes.io/projected/f30371f2-28b9-4f01-b26d-d04927a44679-kube-api-access-xk8tk\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681770 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f30371f2-28b9-4f01-b26d-d04927a44679-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.681827 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f30371f2-28b9-4f01-b26d-d04927a44679-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.682778 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " 
pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.683176 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.687157 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.699197 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrpq\" (UniqueName: \"kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq\") pod \"route-controller-manager-79cb97567b-78blc\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.737592 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.893142 4835 generic.go:334] "Generic (PLEG): container finished" podID="b96a099e-2226-4031-af7f-f912a3d47075" containerID="30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e" exitCode=0 Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.893223 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.893224 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" event={"ID":"b96a099e-2226-4031-af7f-f912a3d47075","Type":"ContainerDied","Data":"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e"} Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.893280 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz" event={"ID":"b96a099e-2226-4031-af7f-f912a3d47075","Type":"ContainerDied","Data":"b9087e410ee05036abc83d25495d104a47ecdc1762451495859b08e182ddce94"} Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.893301 4835 scope.go:117] "RemoveContainer" containerID="30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.897677 4835 generic.go:334] "Generic (PLEG): container finished" podID="f30371f2-28b9-4f01-b26d-d04927a44679" containerID="3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039" exitCode=0 Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.897715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" event={"ID":"f30371f2-28b9-4f01-b26d-d04927a44679","Type":"ContainerDied","Data":"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039"} Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.897745 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" event={"ID":"f30371f2-28b9-4f01-b26d-d04927a44679","Type":"ContainerDied","Data":"2a8f57b32ef69cb5c48ee6ca3d461a2b3e096394b1ab5128f199c349f41497d7"} Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.897779 4835 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-vfncv" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.913668 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.916993 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-qvwzz"] Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.922568 4835 scope.go:117] "RemoveContainer" containerID="30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e" Jan 29 15:33:34 crc kubenswrapper[4835]: E0129 15:33:34.923585 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e\": container with ID starting with 30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e not found: ID does not exist" containerID="30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.923625 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e"} err="failed to get container status \"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e\": rpc error: code = NotFound desc = could not find container \"30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e\": container with ID starting with 30990da71f3cc0ddcc6c3b565e374bc1a0a3eb7e9759b071256322fc93a1406e not found: ID does not exist" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.923648 4835 scope.go:117] "RemoveContainer" containerID="3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039" Jan 29 15:33:34 crc kubenswrapper[4835]: 
I0129 15:33:34.936248 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.938604 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-vfncv"] Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.953067 4835 scope.go:117] "RemoveContainer" containerID="3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039" Jan 29 15:33:34 crc kubenswrapper[4835]: E0129 15:33:34.953867 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039\": container with ID starting with 3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039 not found: ID does not exist" containerID="3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039" Jan 29 15:33:34 crc kubenswrapper[4835]: I0129 15:33:34.953934 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039"} err="failed to get container status \"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039\": rpc error: code = NotFound desc = could not find container \"3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039\": container with ID starting with 3bf0a02a77e9e9d8f6bbd62e2cde3e148e89c21f79328c33d6ee37b781f9f039 not found: ID does not exist" Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.144497 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.907824 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" 
event={"ID":"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe","Type":"ContainerStarted","Data":"e2c9b756bacb5a52a83290ae404b6df691a777d972f908fde5a558b430fd52f8"} Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.909169 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.909233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" event={"ID":"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe","Type":"ContainerStarted","Data":"4bfb5d37b5d891aed4a8a323180e261f9db4f22386b7214570d7127379ce9153"} Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.914083 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:33:35 crc kubenswrapper[4835]: I0129 15:33:35.927406 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" podStartSLOduration=2.9273842009999997 podStartE2EDuration="2.927384201s" podCreationTimestamp="2026-01-29 15:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:35.923819131 +0000 UTC m=+318.116862785" watchObservedRunningTime="2026-01-29 15:33:35.927384201 +0000 UTC m=+318.120427845" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.419702 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:33:36 crc kubenswrapper[4835]: E0129 15:33:36.419972 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30371f2-28b9-4f01-b26d-d04927a44679" containerName="controller-manager" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 
15:33:36.419987 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30371f2-28b9-4f01-b26d-d04927a44679" containerName="controller-manager" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.420123 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30371f2-28b9-4f01-b26d-d04927a44679" containerName="controller-manager" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.420578 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.422736 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.424527 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.424789 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.425266 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.425414 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.425556 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.431109 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.432900 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.499247 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96a099e-2226-4031-af7f-f912a3d47075" path="/var/lib/kubelet/pods/b96a099e-2226-4031-af7f-f912a3d47075/volumes" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.499950 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30371f2-28b9-4f01-b26d-d04927a44679" path="/var/lib/kubelet/pods/f30371f2-28b9-4f01-b26d-d04927a44679/volumes" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.604021 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.604107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.604136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.604190 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj2ts\" (UniqueName: \"kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.604209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.705044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.705127 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.705192 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj2ts\" (UniqueName: \"kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " 
pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.705217 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.705262 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.706362 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.706435 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.706581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: 
\"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.713545 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.722353 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj2ts\" (UniqueName: \"kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts\") pod \"controller-manager-cc8d77586-m2hcw\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:36 crc kubenswrapper[4835]: I0129 15:33:36.741324 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.182047 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:33:37 crc kubenswrapper[4835]: W0129 15:33:37.186058 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8984ae_8f7b_469a_b238_bffc1453f4ce.slice/crio-3fd16956c5d88b5f2552447a43c6c193652cd330916334be087c18a19f14f34a WatchSource:0}: Error finding container 3fd16956c5d88b5f2552447a43c6c193652cd330916334be087c18a19f14f34a: Status 404 returned error can't find the container with id 3fd16956c5d88b5f2552447a43c6c193652cd330916334be087c18a19f14f34a Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.920169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" event={"ID":"db8984ae-8f7b-469a-b238-bffc1453f4ce","Type":"ContainerStarted","Data":"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e"} Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.920518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" event={"ID":"db8984ae-8f7b-469a-b238-bffc1453f4ce","Type":"ContainerStarted","Data":"3fd16956c5d88b5f2552447a43c6c193652cd330916334be087c18a19f14f34a"} Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.920533 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.922829 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 15:33:37 crc kubenswrapper[4835]: 
I0129 15:33:37.924022 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.924058 4835 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ab4b0691cd20b735e3322001cfa7d90918c4a86f76ed3bd4ab3f1f61d51041a7" exitCode=137 Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.924152 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ab4b0691cd20b735e3322001cfa7d90918c4a86f76ed3bd4ab3f1f61d51041a7"} Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.924268 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6761f3d923401d65ef1eaaa56e22d2ebb56059dba04e7a7f8d7ff74c48a9aae2"} Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.924316 4835 scope.go:117] "RemoveContainer" containerID="70ceece315ebe16ee7272f0b9487042869db90659a0312618c58ed94509e929c" Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.925324 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:33:37 crc kubenswrapper[4835]: I0129 15:33:37.944611 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" podStartSLOduration=4.944592393 podStartE2EDuration="4.944592393s" podCreationTimestamp="2026-01-29 15:33:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:33:37.944234624 
+0000 UTC m=+320.137278308" watchObservedRunningTime="2026-01-29 15:33:37.944592393 +0000 UTC m=+320.137636047" Jan 29 15:33:38 crc kubenswrapper[4835]: I0129 15:33:38.932057 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 29 15:33:40 crc kubenswrapper[4835]: I0129 15:33:40.219242 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:33:47 crc kubenswrapper[4835]: I0129 15:33:47.593499 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:33:47 crc kubenswrapper[4835]: I0129 15:33:47.597814 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:33:47 crc kubenswrapper[4835]: I0129 15:33:47.988775 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.092289 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.093627 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" podUID="db8984ae-8f7b-469a-b238-bffc1453f4ce" containerName="controller-manager" containerID="cri-o://93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e" gracePeriod=30 Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.612870 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.803136 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert\") pod \"db8984ae-8f7b-469a-b238-bffc1453f4ce\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.803302 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config\") pod \"db8984ae-8f7b-469a-b238-bffc1453f4ce\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.803393 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles\") pod \"db8984ae-8f7b-469a-b238-bffc1453f4ce\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.803526 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca\") pod \"db8984ae-8f7b-469a-b238-bffc1453f4ce\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.803573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj2ts\" (UniqueName: \"kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts\") pod \"db8984ae-8f7b-469a-b238-bffc1453f4ce\" (UID: \"db8984ae-8f7b-469a-b238-bffc1453f4ce\") " Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.804374 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db8984ae-8f7b-469a-b238-bffc1453f4ce" (UID: "db8984ae-8f7b-469a-b238-bffc1453f4ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.804420 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "db8984ae-8f7b-469a-b238-bffc1453f4ce" (UID: "db8984ae-8f7b-469a-b238-bffc1453f4ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.804544 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config" (OuterVolumeSpecName: "config") pod "db8984ae-8f7b-469a-b238-bffc1453f4ce" (UID: "db8984ae-8f7b-469a-b238-bffc1453f4ce"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.810859 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db8984ae-8f7b-469a-b238-bffc1453f4ce" (UID: "db8984ae-8f7b-469a-b238-bffc1453f4ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.811310 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts" (OuterVolumeSpecName: "kube-api-access-tj2ts") pod "db8984ae-8f7b-469a-b238-bffc1453f4ce" (UID: "db8984ae-8f7b-469a-b238-bffc1453f4ce"). InnerVolumeSpecName "kube-api-access-tj2ts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.905196 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.905265 4835 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.905277 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db8984ae-8f7b-469a-b238-bffc1453f4ce-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.905290 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj2ts\" (UniqueName: \"kubernetes.io/projected/db8984ae-8f7b-469a-b238-bffc1453f4ce-kube-api-access-tj2ts\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:13 crc kubenswrapper[4835]: I0129 15:34:13.905300 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db8984ae-8f7b-469a-b238-bffc1453f4ce-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.152003 4835 generic.go:334] "Generic (PLEG): container finished" podID="db8984ae-8f7b-469a-b238-bffc1453f4ce" containerID="93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e" exitCode=0 Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.152109 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.152270 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" event={"ID":"db8984ae-8f7b-469a-b238-bffc1453f4ce","Type":"ContainerDied","Data":"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e"} Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.152371 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cc8d77586-m2hcw" event={"ID":"db8984ae-8f7b-469a-b238-bffc1453f4ce","Type":"ContainerDied","Data":"3fd16956c5d88b5f2552447a43c6c193652cd330916334be087c18a19f14f34a"} Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.152400 4835 scope.go:117] "RemoveContainer" containerID="93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.182048 4835 scope.go:117] "RemoveContainer" containerID="93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e" Jan 29 15:34:14 crc kubenswrapper[4835]: E0129 15:34:14.183046 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e\": container with ID starting with 93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e not found: ID does not exist" containerID="93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.183102 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e"} err="failed to get container status \"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e\": rpc error: code = NotFound desc = could not find container 
\"93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e\": container with ID starting with 93d112200915cd6f325d13f7e8525043702f0b25a7a9ea55cdc8f98eb674605e not found: ID does not exist" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.191689 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.194968 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cc8d77586-m2hcw"] Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.442246 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-zksrq"] Jan 29 15:34:14 crc kubenswrapper[4835]: E0129 15:34:14.442553 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db8984ae-8f7b-469a-b238-bffc1453f4ce" containerName="controller-manager" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.442568 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="db8984ae-8f7b-469a-b238-bffc1453f4ce" containerName="controller-manager" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.442681 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="db8984ae-8f7b-469a-b238-bffc1453f4ce" containerName="controller-manager" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.443181 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.446410 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.447315 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.448601 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.448646 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.448737 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.448737 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.458318 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.469807 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-zksrq"] Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.503376 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db8984ae-8f7b-469a-b238-bffc1453f4ce" path="/var/lib/kubelet/pods/db8984ae-8f7b-469a-b238-bffc1453f4ce/volumes" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.512658 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-client-ca\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.513021 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-config\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.513082 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c15561-f4d3-4947-be24-eba196d21843-serving-cert\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.614993 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xb2j\" (UniqueName: \"kubernetes.io/projected/b5c15561-f4d3-4947-be24-eba196d21843-kube-api-access-5xb2j\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.615086 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " 
pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.615144 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-config\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.615251 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c15561-f4d3-4947-be24-eba196d21843-serving-cert\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.615357 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-client-ca\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.617219 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-config\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.617519 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-client-ca\") pod \"controller-manager-79cc795f77-zksrq\" (UID: 
\"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.620085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5c15561-f4d3-4947-be24-eba196d21843-serving-cert\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.716257 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xb2j\" (UniqueName: \"kubernetes.io/projected/b5c15561-f4d3-4947-be24-eba196d21843-kube-api-access-5xb2j\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.716346 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.717751 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b5c15561-f4d3-4947-be24-eba196d21843-proxy-ca-bundles\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.748100 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xb2j\" (UniqueName: 
\"kubernetes.io/projected/b5c15561-f4d3-4947-be24-eba196d21843-kube-api-access-5xb2j\") pod \"controller-manager-79cc795f77-zksrq\" (UID: \"b5c15561-f4d3-4947-be24-eba196d21843\") " pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.761956 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:14 crc kubenswrapper[4835]: I0129 15:34:14.971347 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79cc795f77-zksrq"] Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.162296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" event={"ID":"b5c15561-f4d3-4947-be24-eba196d21843","Type":"ContainerStarted","Data":"47c515a7c2a54a2a3fa6eec1a7506c46e6156e75c48bac1861569d673d23724b"} Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.162338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" event={"ID":"b5c15561-f4d3-4947-be24-eba196d21843","Type":"ContainerStarted","Data":"e3e39473db2d036853999ab5f1ad229c90f106ef45112efff0e71ba71ef3631b"} Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.163490 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.164338 4835 patch_prober.go:28] interesting pod/controller-manager-79cc795f77-zksrq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.164376 4835 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" podUID="b5c15561-f4d3-4947-be24-eba196d21843" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Jan 29 15:34:15 crc kubenswrapper[4835]: I0129 15:34:15.182109 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" podStartSLOduration=2.182088162 podStartE2EDuration="2.182088162s" podCreationTimestamp="2026-01-29 15:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:15.181453757 +0000 UTC m=+357.374497411" watchObservedRunningTime="2026-01-29 15:34:15.182088162 +0000 UTC m=+357.375131816" Jan 29 15:34:16 crc kubenswrapper[4835]: I0129 15:34:16.180698 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79cc795f77-zksrq" Jan 29 15:34:21 crc kubenswrapper[4835]: I0129 15:34:21.614922 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:21 crc kubenswrapper[4835]: I0129 15:34:21.616540 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.672167 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-66df7c8f76-rpxmg"] Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.673636 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.691800 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rpxmg"] Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809213 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809269 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-certificates\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809288 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37fb0e18-b20c-4d05-8968-c9133a21aec8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809323 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-bound-sa-token\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809564 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-trusted-ca\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809707 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37fb0e18-b20c-4d05-8968-c9133a21aec8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5cc6\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-kube-api-access-w5cc6\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.809819 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-tls\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc 
kubenswrapper[4835]: I0129 15:34:29.832875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911234 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-certificates\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911283 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37fb0e18-b20c-4d05-8968-c9133a21aec8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911318 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-bound-sa-token\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911339 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-trusted-ca\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: 
\"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37fb0e18-b20c-4d05-8968-c9133a21aec8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911390 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5cc6\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-kube-api-access-w5cc6\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911415 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-tls\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.911991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/37fb0e18-b20c-4d05-8968-c9133a21aec8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.912599 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-trusted-ca\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.912757 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-certificates\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.917602 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/37fb0e18-b20c-4d05-8968-c9133a21aec8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.917992 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-registry-tls\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.927632 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5cc6\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-kube-api-access-w5cc6\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.928309 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37fb0e18-b20c-4d05-8968-c9133a21aec8-bound-sa-token\") pod \"image-registry-66df7c8f76-rpxmg\" (UID: \"37fb0e18-b20c-4d05-8968-c9133a21aec8\") " pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:29 crc kubenswrapper[4835]: I0129 15:34:29.991767 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:30 crc kubenswrapper[4835]: I0129 15:34:30.377089 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-rpxmg"] Jan 29 15:34:30 crc kubenswrapper[4835]: W0129 15:34:30.387130 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37fb0e18_b20c_4d05_8968_c9133a21aec8.slice/crio-d05c22757b836755fad535a9ba6b6bb7f9c2fb359e237d97705fc70f522cfc2e WatchSource:0}: Error finding container d05c22757b836755fad535a9ba6b6bb7f9c2fb359e237d97705fc70f522cfc2e: Status 404 returned error can't find the container with id d05c22757b836755fad535a9ba6b6bb7f9c2fb359e237d97705fc70f522cfc2e Jan 29 15:34:31 crc kubenswrapper[4835]: I0129 15:34:31.263877 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" event={"ID":"37fb0e18-b20c-4d05-8968-c9133a21aec8","Type":"ContainerStarted","Data":"4659f250688851d51bb2a65968c974a3bcc07ff1ce4927472750aab00cfbe4f7"} Jan 29 15:34:31 crc kubenswrapper[4835]: I0129 15:34:31.264215 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:31 crc kubenswrapper[4835]: I0129 15:34:31.264228 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" 
event={"ID":"37fb0e18-b20c-4d05-8968-c9133a21aec8","Type":"ContainerStarted","Data":"d05c22757b836755fad535a9ba6b6bb7f9c2fb359e237d97705fc70f522cfc2e"} Jan 29 15:34:31 crc kubenswrapper[4835]: I0129 15:34:31.287052 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" podStartSLOduration=2.287034637 podStartE2EDuration="2.287034637s" podCreationTimestamp="2026-01-29 15:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:31.282990849 +0000 UTC m=+373.476034513" watchObservedRunningTime="2026-01-29 15:34:31.287034637 +0000 UTC m=+373.480078291" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.074526 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.074950 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" podUID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" containerName="route-controller-manager" containerID="cri-o://e2c9b756bacb5a52a83290ae404b6df691a777d972f908fde5a558b430fd52f8" gracePeriod=30 Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.274420 4835 generic.go:334] "Generic (PLEG): container finished" podID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" containerID="e2c9b756bacb5a52a83290ae404b6df691a777d972f908fde5a558b430fd52f8" exitCode=0 Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.274517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" event={"ID":"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe","Type":"ContainerDied","Data":"e2c9b756bacb5a52a83290ae404b6df691a777d972f908fde5a558b430fd52f8"} Jan 29 15:34:33 crc kubenswrapper[4835]: 
I0129 15:34:33.536788 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.663011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca\") pod \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.663142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrpq\" (UniqueName: \"kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq\") pod \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.663175 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert\") pod \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.663219 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config\") pod \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\" (UID: \"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe\") " Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.664220 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config" (OuterVolumeSpecName: "config") pod "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" (UID: "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.664493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca" (OuterVolumeSpecName: "client-ca") pod "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" (UID: "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.668314 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" (UID: "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.668748 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq" (OuterVolumeSpecName: "kube-api-access-zqrpq") pod "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" (UID: "2ac2ae29-e1f6-44ef-a3d0-f27571d99abe"). InnerVolumeSpecName "kube-api-access-zqrpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.764794 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrpq\" (UniqueName: \"kubernetes.io/projected/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-kube-api-access-zqrpq\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.765082 4835 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.765163 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:33 crc kubenswrapper[4835]: I0129 15:34:33.765231 4835 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.281797 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" event={"ID":"2ac2ae29-e1f6-44ef-a3d0-f27571d99abe","Type":"ContainerDied","Data":"4bfb5d37b5d891aed4a8a323180e261f9db4f22386b7214570d7127379ce9153"} Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.281858 4835 scope.go:117] "RemoveContainer" containerID="e2c9b756bacb5a52a83290ae404b6df691a777d972f908fde5a558b430fd52f8" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.281885 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.316060 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.320638 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79cb97567b-78blc"] Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.451448 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k"] Jan 29 15:34:34 crc kubenswrapper[4835]: E0129 15:34:34.451716 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" containerName="route-controller-manager" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.451730 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" containerName="route-controller-manager" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.451860 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" containerName="route-controller-manager" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.452270 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.454172 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.454236 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.454612 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.454741 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.454832 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.455326 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.467112 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k"] Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.498687 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ac2ae29-e1f6-44ef-a3d0-f27571d99abe" path="/var/lib/kubelet/pods/2ac2ae29-e1f6-44ef-a3d0-f27571d99abe/volumes" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.576046 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-config\") pod 
\"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.576111 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbhh7\" (UniqueName: \"kubernetes.io/projected/35d42a72-3c1c-44db-bab2-bdfe0555b95c-kube-api-access-mbhh7\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.576240 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-client-ca\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.576335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d42a72-3c1c-44db-bab2-bdfe0555b95c-serving-cert\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.677530 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d42a72-3c1c-44db-bab2-bdfe0555b95c-serving-cert\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 
15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.677681 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-config\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.677722 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbhh7\" (UniqueName: \"kubernetes.io/projected/35d42a72-3c1c-44db-bab2-bdfe0555b95c-kube-api-access-mbhh7\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.677752 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-client-ca\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.678960 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-client-ca\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.679167 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35d42a72-3c1c-44db-bab2-bdfe0555b95c-config\") pod 
\"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.688445 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35d42a72-3c1c-44db-bab2-bdfe0555b95c-serving-cert\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.694357 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbhh7\" (UniqueName: \"kubernetes.io/projected/35d42a72-3c1c-44db-bab2-bdfe0555b95c-kube-api-access-mbhh7\") pod \"route-controller-manager-6d75c4448c-rzk8k\" (UID: \"35d42a72-3c1c-44db-bab2-bdfe0555b95c\") " pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:34 crc kubenswrapper[4835]: I0129 15:34:34.769356 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.148760 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k"] Jan 29 15:34:35 crc kubenswrapper[4835]: W0129 15:34:35.155867 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d42a72_3c1c_44db_bab2_bdfe0555b95c.slice/crio-3addeea1f7742c56842f2b9587edcc269ff9a5c049f1c213ee48bd7d32767e79 WatchSource:0}: Error finding container 3addeea1f7742c56842f2b9587edcc269ff9a5c049f1c213ee48bd7d32767e79: Status 404 returned error can't find the container with id 3addeea1f7742c56842f2b9587edcc269ff9a5c049f1c213ee48bd7d32767e79 Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.299314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" event={"ID":"35d42a72-3c1c-44db-bab2-bdfe0555b95c","Type":"ContainerStarted","Data":"f7ecf59656969ebe772d833b70c036e1f8eed2de150019395099bebc479dbdb3"} Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.299785 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" event={"ID":"35d42a72-3c1c-44db-bab2-bdfe0555b95c","Type":"ContainerStarted","Data":"3addeea1f7742c56842f2b9587edcc269ff9a5c049f1c213ee48bd7d32767e79"} Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.299862 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.302669 4835 patch_prober.go:28] interesting pod/route-controller-manager-6d75c4448c-rzk8k container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.302716 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" podUID="35d42a72-3c1c-44db-bab2-bdfe0555b95c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 29 15:34:35 crc kubenswrapper[4835]: I0129 15:34:35.324633 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" podStartSLOduration=2.324608337 podStartE2EDuration="2.324608337s" podCreationTimestamp="2026-01-29 15:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:35.319078453 +0000 UTC m=+377.512122117" watchObservedRunningTime="2026-01-29 15:34:35.324608337 +0000 UTC m=+377.517652011" Jan 29 15:34:36 crc kubenswrapper[4835]: I0129 15:34:36.311845 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d75c4448c-rzk8k" Jan 29 15:34:49 crc kubenswrapper[4835]: I0129 15:34:49.998090 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-rpxmg" Jan 29 15:34:50 crc kubenswrapper[4835]: I0129 15:34:50.117194 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:34:51 crc kubenswrapper[4835]: I0129 15:34:51.615362 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:34:51 crc kubenswrapper[4835]: I0129 15:34:51.615714 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.729103 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.730090 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7c9gq" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="registry-server" containerID="cri-o://77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962" gracePeriod=30 Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.735438 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.735817 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kfhvt" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="registry-server" containerID="cri-o://77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac" gracePeriod=30 Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.752986 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.753582 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" podUID="0d653843-6818-4b28-a3d8-53407876dfe8" containerName="marketplace-operator" containerID="cri-o://5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2" gracePeriod=30 Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.771664 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.771915 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wsxd5" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="registry-server" containerID="cri-o://87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90" gracePeriod=30 Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.778075 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.778260 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5xfqg" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="registry-server" containerID="cri-o://c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11" gracePeriod=30 Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.783611 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2bp"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.784340 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.790910 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2bp"] Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.830649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c7d43b2-a628-4671-8901-170372171693-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.830827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7d43b2-a628-4671-8901-170372171693-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.830901 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwb4r\" (UniqueName: \"kubernetes.io/projected/9c7d43b2-a628-4671-8901-170372171693-kube-api-access-rwb4r\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.932367 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c7d43b2-a628-4671-8901-170372171693-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: 
\"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.932451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7d43b2-a628-4671-8901-170372171693-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.932485 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwb4r\" (UniqueName: \"kubernetes.io/projected/9c7d43b2-a628-4671-8901-170372171693-kube-api-access-rwb4r\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.934156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7d43b2-a628-4671-8901-170372171693-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.940532 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9c7d43b2-a628-4671-8901-170372171693-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:56 crc kubenswrapper[4835]: I0129 15:34:56.962008 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwb4r\" 
(UniqueName: \"kubernetes.io/projected/9c7d43b2-a628-4671-8901-170372171693-kube-api-access-rwb4r\") pod \"marketplace-operator-79b997595-sl2bp\" (UID: \"9c7d43b2-a628-4671-8901-170372171693\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.193274 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.204697 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.260779 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.267150 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.268546 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.287670 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.339881 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2js2\" (UniqueName: \"kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2\") pod \"c397420c-528f-4c7a-ad5b-45767f0b7af1\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.339942 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content\") pod \"c397420c-528f-4c7a-ad5b-45767f0b7af1\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.340035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities\") pod \"c397420c-528f-4c7a-ad5b-45767f0b7af1\" (UID: \"c397420c-528f-4c7a-ad5b-45767f0b7af1\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.343744 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities" (OuterVolumeSpecName: "utilities") pod "c397420c-528f-4c7a-ad5b-45767f0b7af1" (UID: "c397420c-528f-4c7a-ad5b-45767f0b7af1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.349856 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2" (OuterVolumeSpecName: "kube-api-access-j2js2") pod "c397420c-528f-4c7a-ad5b-45767f0b7af1" (UID: "c397420c-528f-4c7a-ad5b-45767f0b7af1"). InnerVolumeSpecName "kube-api-access-j2js2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.403083 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c397420c-528f-4c7a-ad5b-45767f0b7af1" (UID: "c397420c-528f-4c7a-ad5b-45767f0b7af1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440308 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerID="77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac" exitCode=0 Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440369 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfhvt" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerDied","Data":"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440447 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfhvt" event={"ID":"3c3d86e8-b4be-4d4f-b844-516289b1b7ec","Type":"ContainerDied","Data":"e48dc508ba80ba64ee1c0d9f29f0587ebe41bcab4bb23c8dc8ece725c6d8837f"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440478 4835 scope.go:117] "RemoveContainer" containerID="77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440879 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcchx\" (UniqueName: 
\"kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx\") pod \"ea72779c-6414-41e3-9b36-9d64cf847442\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.440938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities\") pod \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441032 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca\") pod \"0d653843-6818-4b28-a3d8-53407876dfe8\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441064 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cqbz\" (UniqueName: \"kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz\") pod \"0d653843-6818-4b28-a3d8-53407876dfe8\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content\") pod \"ea72779c-6414-41e3-9b36-9d64cf847442\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441153 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities\") pod \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " Jan 29 15:34:57 crc 
kubenswrapper[4835]: I0129 15:34:57.441189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxskb\" (UniqueName: \"kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb\") pod \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441227 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities\") pod \"ea72779c-6414-41e3-9b36-9d64cf847442\" (UID: \"ea72779c-6414-41e3-9b36-9d64cf847442\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7ld\" (UniqueName: \"kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld\") pod \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441288 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics\") pod \"0d653843-6818-4b28-a3d8-53407876dfe8\" (UID: \"0d653843-6818-4b28-a3d8-53407876dfe8\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441316 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content\") pod \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\" (UID: \"3ec7cda2-a20a-4b97-9bda-50a75e00b65b\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441348 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content\") pod \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\" (UID: \"3c3d86e8-b4be-4d4f-b844-516289b1b7ec\") " Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441663 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2js2\" (UniqueName: \"kubernetes.io/projected/c397420c-528f-4c7a-ad5b-45767f0b7af1-kube-api-access-j2js2\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441684 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441703 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c397420c-528f-4c7a-ad5b-45767f0b7af1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.441826 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0d653843-6818-4b28-a3d8-53407876dfe8" (UID: "0d653843-6818-4b28-a3d8-53407876dfe8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.442148 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities" (OuterVolumeSpecName: "utilities") pod "ea72779c-6414-41e3-9b36-9d64cf847442" (UID: "ea72779c-6414-41e3-9b36-9d64cf847442"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.442365 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities" (OuterVolumeSpecName: "utilities") pod "3c3d86e8-b4be-4d4f-b844-516289b1b7ec" (UID: "3c3d86e8-b4be-4d4f-b844-516289b1b7ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.443552 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities" (OuterVolumeSpecName: "utilities") pod "3ec7cda2-a20a-4b97-9bda-50a75e00b65b" (UID: "3ec7cda2-a20a-4b97-9bda-50a75e00b65b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.448420 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d653843-6818-4b28-a3d8-53407876dfe8" containerID="5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2" exitCode=0 Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.448557 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" event={"ID":"0d653843-6818-4b28-a3d8-53407876dfe8","Type":"ContainerDied","Data":"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.448597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" event={"ID":"0d653843-6818-4b28-a3d8-53407876dfe8","Type":"ContainerDied","Data":"8e48776a6f9d8bba470f2641d39e281936d01478f31492459d58bb542a3976d0"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.448667 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cfwcj" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.452628 4835 generic.go:334] "Generic (PLEG): container finished" podID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerID="77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962" exitCode=0 Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.452665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld" (OuterVolumeSpecName: "kube-api-access-jk7ld") pod "3c3d86e8-b4be-4d4f-b844-516289b1b7ec" (UID: "3c3d86e8-b4be-4d4f-b844-516289b1b7ec"). InnerVolumeSpecName "kube-api-access-jk7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.452699 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c9gq" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.452700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerDied","Data":"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.452851 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c9gq" event={"ID":"c397420c-528f-4c7a-ad5b-45767f0b7af1","Type":"ContainerDied","Data":"e6b3ec2921863625af211ade5bf5884c136d30720bca2978d976740514bd9c32"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.455778 4835 generic.go:334] "Generic (PLEG): container finished" podID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerID="c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11" exitCode=0 Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.455890 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerDied","Data":"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.455929 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5xfqg" event={"ID":"3ec7cda2-a20a-4b97-9bda-50a75e00b65b","Type":"ContainerDied","Data":"76083dc0bf1432de131d21eef43e382a4e96cb19658388ee00309f4f5783a0ef"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.456013 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5xfqg" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.460585 4835 generic.go:334] "Generic (PLEG): container finished" podID="ea72779c-6414-41e3-9b36-9d64cf847442" containerID="87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90" exitCode=0 Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.460639 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerDied","Data":"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.460672 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wsxd5" event={"ID":"ea72779c-6414-41e3-9b36-9d64cf847442","Type":"ContainerDied","Data":"1212da8d80c0e1d744d66694d3c86512a47531dea4021064deccef0d724b2d23"} Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.460748 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wsxd5" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.462513 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0d653843-6818-4b28-a3d8-53407876dfe8" (UID: "0d653843-6818-4b28-a3d8-53407876dfe8"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.462730 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb" (OuterVolumeSpecName: "kube-api-access-wxskb") pod "3ec7cda2-a20a-4b97-9bda-50a75e00b65b" (UID: "3ec7cda2-a20a-4b97-9bda-50a75e00b65b"). InnerVolumeSpecName "kube-api-access-wxskb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.462811 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz" (OuterVolumeSpecName: "kube-api-access-9cqbz") pod "0d653843-6818-4b28-a3d8-53407876dfe8" (UID: "0d653843-6818-4b28-a3d8-53407876dfe8"). InnerVolumeSpecName "kube-api-access-9cqbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.465183 4835 scope.go:117] "RemoveContainer" containerID="feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.467487 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx" (OuterVolumeSpecName: "kube-api-access-kcchx") pod "ea72779c-6414-41e3-9b36-9d64cf847442" (UID: "ea72779c-6414-41e3-9b36-9d64cf847442"). InnerVolumeSpecName "kube-api-access-kcchx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.475823 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea72779c-6414-41e3-9b36-9d64cf847442" (UID: "ea72779c-6414-41e3-9b36-9d64cf847442"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.488124 4835 scope.go:117] "RemoveContainer" containerID="c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.503006 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.510459 4835 scope.go:117] "RemoveContainer" containerID="77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.511118 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac\": container with ID starting with 77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac not found: ID does not exist" containerID="77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511153 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac"} err="failed to get container status \"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac\": rpc error: code = NotFound desc = could not find container \"77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac\": container with ID starting with 77482d79a4094317d2d1a83ccf950ecc8d20778bffd8aee87007233b32ed01ac not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511177 4835 scope.go:117] "RemoveContainer" containerID="feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.511437 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f\": container with ID starting with feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f not found: ID does not exist" containerID="feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511486 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f"} err="failed to get container status \"feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f\": rpc error: code = NotFound desc = could not find container \"feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f\": container with ID starting with feb3819c21965621015c7eee47618ece0b823593085868352d9656b72e238d3f not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511507 4835 scope.go:117] "RemoveContainer" containerID="c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.511747 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f\": container with ID starting with c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f not found: ID does not exist" containerID="c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511769 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f"} err="failed to get container status \"c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f\": rpc error: code = NotFound desc = could not find container \"c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f\": 
container with ID starting with c6b8abf96cdc655497436c7276e0c2b7f8e725aed9cf6d793cc6e5dcd7b32d2f not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.511784 4835 scope.go:117] "RemoveContainer" containerID="5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.514051 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7c9gq"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.530096 4835 scope.go:117] "RemoveContainer" containerID="5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.530843 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2\": container with ID starting with 5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2 not found: ID does not exist" containerID="5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.530892 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2"} err="failed to get container status \"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2\": rpc error: code = NotFound desc = could not find container \"5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2\": container with ID starting with 5419cde13881938d16ffe775829f124f9051f13c8e686e258cdaaea1d66d1fa2 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.530924 4835 scope.go:117] "RemoveContainer" containerID="77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.538979 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c3d86e8-b4be-4d4f-b844-516289b1b7ec" (UID: "3c3d86e8-b4be-4d4f-b844-516289b1b7ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543329 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543370 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cqbz\" (UniqueName: \"kubernetes.io/projected/0d653843-6818-4b28-a3d8-53407876dfe8-kube-api-access-9cqbz\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543381 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543391 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543402 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxskb\" (UniqueName: \"kubernetes.io/projected/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-kube-api-access-wxskb\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543414 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea72779c-6414-41e3-9b36-9d64cf847442-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 
crc kubenswrapper[4835]: I0129 15:34:57.543425 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7ld\" (UniqueName: \"kubernetes.io/projected/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-kube-api-access-jk7ld\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543436 4835 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d653843-6818-4b28-a3d8-53407876dfe8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543448 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c3d86e8-b4be-4d4f-b844-516289b1b7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543461 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcchx\" (UniqueName: \"kubernetes.io/projected/ea72779c-6414-41e3-9b36-9d64cf847442-kube-api-access-kcchx\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.543490 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.545367 4835 scope.go:117] "RemoveContainer" containerID="d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.566606 4835 scope.go:117] "RemoveContainer" containerID="443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.579858 4835 scope.go:117] "RemoveContainer" containerID="77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.580651 4835 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962\": container with ID starting with 77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962 not found: ID does not exist" containerID="77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.580694 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962"} err="failed to get container status \"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962\": rpc error: code = NotFound desc = could not find container \"77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962\": container with ID starting with 77d5962dea50f2612826358302ccfe58f24f6be7b2dcc41d22387561fa8a9962 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.580721 4835 scope.go:117] "RemoveContainer" containerID="d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.581166 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6\": container with ID starting with d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6 not found: ID does not exist" containerID="d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.581208 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6"} err="failed to get container status \"d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6\": rpc error: code = NotFound desc = could 
not find container \"d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6\": container with ID starting with d850b9d4afcf349175b0430ed24cdecee76a602c3124ba308262761cb24aadc6 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.581248 4835 scope.go:117] "RemoveContainer" containerID="443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.581587 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30\": container with ID starting with 443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30 not found: ID does not exist" containerID="443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.581638 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30"} err="failed to get container status \"443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30\": rpc error: code = NotFound desc = could not find container \"443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30\": container with ID starting with 443479a9647903e3be0f835382cae06060c76ab2e8add6648a525233a3ac8c30 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.581653 4835 scope.go:117] "RemoveContainer" containerID="c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.593077 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ec7cda2-a20a-4b97-9bda-50a75e00b65b" (UID: "3ec7cda2-a20a-4b97-9bda-50a75e00b65b"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.594768 4835 scope.go:117] "RemoveContainer" containerID="ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.615159 4835 scope.go:117] "RemoveContainer" containerID="7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.627417 4835 scope.go:117] "RemoveContainer" containerID="c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.627885 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11\": container with ID starting with c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11 not found: ID does not exist" containerID="c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628001 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11"} err="failed to get container status \"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11\": rpc error: code = NotFound desc = could not find container \"c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11\": container with ID starting with c8d923516eecb7f9c96191748e7ded130257d68df6de2d5e91ed967b42eb4b11 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628100 4835 scope.go:117] "RemoveContainer" containerID="ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.628366 4835 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7\": container with ID starting with ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7 not found: ID does not exist" containerID="ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628449 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7"} err="failed to get container status \"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7\": rpc error: code = NotFound desc = could not find container \"ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7\": container with ID starting with ae47625b862b9f69c00284c1b382cda2ed0afc1858887cbeff5913931b75dfa7 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628550 4835 scope.go:117] "RemoveContainer" containerID="7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.628774 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81\": container with ID starting with 7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81 not found: ID does not exist" containerID="7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628860 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81"} err="failed to get container status \"7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81\": rpc error: code = NotFound desc = could not find container 
\"7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81\": container with ID starting with 7cdf4e65e459ce95f22b345f0965f710db980b91592426601c759d8900478c81 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.628936 4835 scope.go:117] "RemoveContainer" containerID="87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.644176 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ec7cda2-a20a-4b97-9bda-50a75e00b65b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.649413 4835 scope.go:117] "RemoveContainer" containerID="59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.662509 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2bp"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.666428 4835 scope.go:117] "RemoveContainer" containerID="a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea" Jan 29 15:34:57 crc kubenswrapper[4835]: W0129 15:34:57.670900 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c7d43b2_a628_4671_8901_170372171693.slice/crio-ae69af2e4972810964670227fc5f853db324072427a644c6e5435c50fbfd5afe WatchSource:0}: Error finding container ae69af2e4972810964670227fc5f853db324072427a644c6e5435c50fbfd5afe: Status 404 returned error can't find the container with id ae69af2e4972810964670227fc5f853db324072427a644c6e5435c50fbfd5afe Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.680569 4835 scope.go:117] "RemoveContainer" containerID="87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.680930 4835 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90\": container with ID starting with 87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90 not found: ID does not exist" containerID="87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.680979 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90"} err="failed to get container status \"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90\": rpc error: code = NotFound desc = could not find container \"87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90\": container with ID starting with 87b2033a2816f81b54f73f30063e6c9041e0f4e83c5191cf247b326ff13f4f90 not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.681013 4835 scope.go:117] "RemoveContainer" containerID="59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.681310 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b\": container with ID starting with 59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b not found: ID does not exist" containerID="59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.681343 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b"} err="failed to get container status \"59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b\": rpc error: code = NotFound desc = could 
not find container \"59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b\": container with ID starting with 59c90b8b55be0156077c69271d8b20637d77c2bb3cece887465224951b9bf10b not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.681365 4835 scope.go:117] "RemoveContainer" containerID="a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea" Jan 29 15:34:57 crc kubenswrapper[4835]: E0129 15:34:57.681684 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea\": container with ID starting with a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea not found: ID does not exist" containerID="a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.681730 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea"} err="failed to get container status \"a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea\": rpc error: code = NotFound desc = could not find container \"a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea\": container with ID starting with a324e736700d9183cc1b193bf96caab525c4b1c85036cdc3bd242bf0904d73ea not found: ID does not exist" Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.778283 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.781502 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kfhvt"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.806607 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:34:57 
crc kubenswrapper[4835]: I0129 15:34:57.821523 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cfwcj"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.829388 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.835350 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wsxd5"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.839041 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:34:57 crc kubenswrapper[4835]: I0129 15:34:57.841833 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5xfqg"] Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.471368 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" event={"ID":"9c7d43b2-a628-4671-8901-170372171693","Type":"ContainerStarted","Data":"0e4571b9e6dde61035dbaee1a43d44cd54ca3caaf4653c2dccdbd128b9e03e7e"} Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.471610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" event={"ID":"9c7d43b2-a628-4671-8901-170372171693","Type":"ContainerStarted","Data":"ae69af2e4972810964670227fc5f853db324072427a644c6e5435c50fbfd5afe"} Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.471865 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.476068 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.491547 4835 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sl2bp" podStartSLOduration=2.491526789 podStartE2EDuration="2.491526789s" podCreationTimestamp="2026-01-29 15:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:34:58.487681377 +0000 UTC m=+400.680725031" watchObservedRunningTime="2026-01-29 15:34:58.491526789 +0000 UTC m=+400.684570453" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.500721 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d653843-6818-4b28-a3d8-53407876dfe8" path="/var/lib/kubelet/pods/0d653843-6818-4b28-a3d8-53407876dfe8/volumes" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.501900 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" path="/var/lib/kubelet/pods/3c3d86e8-b4be-4d4f-b844-516289b1b7ec/volumes" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.503181 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" path="/var/lib/kubelet/pods/3ec7cda2-a20a-4b97-9bda-50a75e00b65b/volumes" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.504965 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" path="/var/lib/kubelet/pods/c397420c-528f-4c7a-ad5b-45767f0b7af1/volumes" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.506198 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" path="/var/lib/kubelet/pods/ea72779c-6414-41e3-9b36-9d64cf847442/volumes" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945036 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xsmv7"] Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 
15:34:58.945284 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945300 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945316 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945324 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945331 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945338 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945347 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945354 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945361 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945367 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 
15:34:58.945379 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945387 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945400 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d653843-6818-4b28-a3d8-53407876dfe8" containerName="marketplace-operator" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945407 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d653843-6818-4b28-a3d8-53407876dfe8" containerName="marketplace-operator" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945417 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945424 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945433 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945440 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945448 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945455 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 
15:34:58.945498 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945507 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945519 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945528 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="extract-utilities" Jan 29 15:34:58 crc kubenswrapper[4835]: E0129 15:34:58.945537 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945543 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="extract-content" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945649 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c397420c-528f-4c7a-ad5b-45767f0b7af1" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945662 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3d86e8-b4be-4d4f-b844-516289b1b7ec" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945675 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea72779c-6414-41e3-9b36-9d64cf847442" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.945685 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d653843-6818-4b28-a3d8-53407876dfe8" containerName="marketplace-operator" Jan 29 15:34:58 crc 
kubenswrapper[4835]: I0129 15:34:58.945697 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec7cda2-a20a-4b97-9bda-50a75e00b65b" containerName="registry-server" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.946603 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.948275 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 15:34:58 crc kubenswrapper[4835]: I0129 15:34:58.956858 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xsmv7"] Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.073957 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-utilities\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.074030 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kv7f\" (UniqueName: \"kubernetes.io/projected/30b77bb8-df69-4504-ae68-5e327d4a3992-kube-api-access-7kv7f\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.074072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-catalog-content\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 
15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.140175 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z8mjw"] Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.141266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.142798 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.161959 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8mjw"] Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.175123 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-catalog-content\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.175196 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-utilities\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.175314 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kv7f\" (UniqueName: \"kubernetes.io/projected/30b77bb8-df69-4504-ae68-5e327d4a3992-kube-api-access-7kv7f\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.175759 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-catalog-content\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.175815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b77bb8-df69-4504-ae68-5e327d4a3992-utilities\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.193618 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kv7f\" (UniqueName: \"kubernetes.io/projected/30b77bb8-df69-4504-ae68-5e327d4a3992-kube-api-access-7kv7f\") pod \"certified-operators-xsmv7\" (UID: \"30b77bb8-df69-4504-ae68-5e327d4a3992\") " pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.276822 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-catalog-content\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.276887 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-utilities\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.277061 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khsxf\" (UniqueName: \"kubernetes.io/projected/4fa95b30-fb32-42f1-a406-14562750b5c9-kube-api-access-khsxf\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.287907 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.378618 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khsxf\" (UniqueName: \"kubernetes.io/projected/4fa95b30-fb32-42f1-a406-14562750b5c9-kube-api-access-khsxf\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.378720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-catalog-content\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.378738 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-utilities\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.379692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-catalog-content\") pod 
\"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.379700 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa95b30-fb32-42f1-a406-14562750b5c9-utilities\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.401581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khsxf\" (UniqueName: \"kubernetes.io/projected/4fa95b30-fb32-42f1-a406-14562750b5c9-kube-api-access-khsxf\") pod \"redhat-marketplace-z8mjw\" (UID: \"4fa95b30-fb32-42f1-a406-14562750b5c9\") " pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.459289 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xsmv7"] Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.459402 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:34:59 crc kubenswrapper[4835]: W0129 15:34:59.464927 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30b77bb8_df69_4504_ae68_5e327d4a3992.slice/crio-67c29751b44a4dc83bec376230cb35f3f3949af78f34bdbed62ca9b1000d9908 WatchSource:0}: Error finding container 67c29751b44a4dc83bec376230cb35f3f3949af78f34bdbed62ca9b1000d9908: Status 404 returned error can't find the container with id 67c29751b44a4dc83bec376230cb35f3f3949af78f34bdbed62ca9b1000d9908 Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.486301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsmv7" event={"ID":"30b77bb8-df69-4504-ae68-5e327d4a3992","Type":"ContainerStarted","Data":"67c29751b44a4dc83bec376230cb35f3f3949af78f34bdbed62ca9b1000d9908"} Jan 29 15:34:59 crc kubenswrapper[4835]: I0129 15:34:59.847968 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z8mjw"] Jan 29 15:34:59 crc kubenswrapper[4835]: W0129 15:34:59.862913 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa95b30_fb32_42f1_a406_14562750b5c9.slice/crio-46693a60fcea585449c16c3c79327c7e06abccbdc2ec0f73d08ba3b7fab079de WatchSource:0}: Error finding container 46693a60fcea585449c16c3c79327c7e06abccbdc2ec0f73d08ba3b7fab079de: Status 404 returned error can't find the container with id 46693a60fcea585449c16c3c79327c7e06abccbdc2ec0f73d08ba3b7fab079de Jan 29 15:35:00 crc kubenswrapper[4835]: I0129 15:35:00.492582 4835 generic.go:334] "Generic (PLEG): container finished" podID="4fa95b30-fb32-42f1-a406-14562750b5c9" containerID="3ce821ecc06c5aa950880eed242919e623e927cbfc7001712c250f5f71383793" exitCode=0 Jan 29 15:35:00 crc kubenswrapper[4835]: I0129 15:35:00.496730 4835 generic.go:334] 
"Generic (PLEG): container finished" podID="30b77bb8-df69-4504-ae68-5e327d4a3992" containerID="e67a976372d90e9e5963a544982bb8bfdb13887543e6523a89fdf80bb7a67665" exitCode=0 Jan 29 15:35:00 crc kubenswrapper[4835]: I0129 15:35:00.498593 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8mjw" event={"ID":"4fa95b30-fb32-42f1-a406-14562750b5c9","Type":"ContainerDied","Data":"3ce821ecc06c5aa950880eed242919e623e927cbfc7001712c250f5f71383793"} Jan 29 15:35:00 crc kubenswrapper[4835]: I0129 15:35:00.498635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8mjw" event={"ID":"4fa95b30-fb32-42f1-a406-14562750b5c9","Type":"ContainerStarted","Data":"46693a60fcea585449c16c3c79327c7e06abccbdc2ec0f73d08ba3b7fab079de"} Jan 29 15:35:00 crc kubenswrapper[4835]: I0129 15:35:00.498652 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsmv7" event={"ID":"30b77bb8-df69-4504-ae68-5e327d4a3992","Type":"ContainerDied","Data":"e67a976372d90e9e5963a544982bb8bfdb13887543e6523a89fdf80bb7a67665"} Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.344136 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k5k76"] Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.345172 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.347079 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.347778 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5k76"] Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.407537 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm9fv\" (UniqueName: \"kubernetes.io/projected/44dfd56f-fd32-439c-9b01-0405d1947663-kube-api-access-fm9fv\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.407729 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-utilities\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.407775 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-catalog-content\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.508645 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm9fv\" (UniqueName: \"kubernetes.io/projected/44dfd56f-fd32-439c-9b01-0405d1947663-kube-api-access-fm9fv\") pod \"redhat-operators-k5k76\" (UID: 
\"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.509046 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-utilities\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.509068 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-catalog-content\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.509760 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-catalog-content\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.509875 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44dfd56f-fd32-439c-9b01-0405d1947663-utilities\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.531798 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm9fv\" (UniqueName: \"kubernetes.io/projected/44dfd56f-fd32-439c-9b01-0405d1947663-kube-api-access-fm9fv\") pod \"redhat-operators-k5k76\" (UID: \"44dfd56f-fd32-439c-9b01-0405d1947663\") " 
pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.548352 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9pqfh"] Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.549415 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.552419 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.555362 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pqfh"] Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.664531 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.710895 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2799\" (UniqueName: \"kubernetes.io/projected/1b2510b4-6dfe-4752-be13-cf4693bc139c-kube-api-access-p2799\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.711177 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-catalog-content\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.711201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-utilities\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.813685 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-catalog-content\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.813994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-utilities\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.814050 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2799\" (UniqueName: \"kubernetes.io/projected/1b2510b4-6dfe-4752-be13-cf4693bc139c-kube-api-access-p2799\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.814292 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-catalog-content\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.815077 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1b2510b4-6dfe-4752-be13-cf4693bc139c-utilities\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.838341 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2799\" (UniqueName: \"kubernetes.io/projected/1b2510b4-6dfe-4752-be13-cf4693bc139c-kube-api-access-p2799\") pod \"community-operators-9pqfh\" (UID: \"1b2510b4-6dfe-4752-be13-cf4693bc139c\") " pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:01 crc kubenswrapper[4835]: I0129 15:35:01.874204 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.089339 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k5k76"] Jan 29 15:35:02 crc kubenswrapper[4835]: W0129 15:35:02.094050 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44dfd56f_fd32_439c_9b01_0405d1947663.slice/crio-91e9433de6bbdc876ec76ad7fb0eca2f585bd008556c699cd3080759a1c5bc57 WatchSource:0}: Error finding container 91e9433de6bbdc876ec76ad7fb0eca2f585bd008556c699cd3080759a1c5bc57: Status 404 returned error can't find the container with id 91e9433de6bbdc876ec76ad7fb0eca2f585bd008556c699cd3080759a1c5bc57 Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.269288 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pqfh"] Jan 29 15:35:02 crc kubenswrapper[4835]: W0129 15:35:02.320806 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b2510b4_6dfe_4752_be13_cf4693bc139c.slice/crio-d5b512ad0c5298846c1e75ab1020c6e2cea23708c8d00370e8236e9c720af0aa 
WatchSource:0}: Error finding container d5b512ad0c5298846c1e75ab1020c6e2cea23708c8d00370e8236e9c720af0aa: Status 404 returned error can't find the container with id d5b512ad0c5298846c1e75ab1020c6e2cea23708c8d00370e8236e9c720af0aa Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.508725 4835 generic.go:334] "Generic (PLEG): container finished" podID="44dfd56f-fd32-439c-9b01-0405d1947663" containerID="907fec1d6f47a089afb5a6b429e5cc87e4744a120cc85e7fecdf22e6f25f9a40" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.508824 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5k76" event={"ID":"44dfd56f-fd32-439c-9b01-0405d1947663","Type":"ContainerDied","Data":"907fec1d6f47a089afb5a6b429e5cc87e4744a120cc85e7fecdf22e6f25f9a40"} Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.508899 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5k76" event={"ID":"44dfd56f-fd32-439c-9b01-0405d1947663","Type":"ContainerStarted","Data":"91e9433de6bbdc876ec76ad7fb0eca2f585bd008556c699cd3080759a1c5bc57"} Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.511008 4835 generic.go:334] "Generic (PLEG): container finished" podID="1b2510b4-6dfe-4752-be13-cf4693bc139c" containerID="935dcb9e336d6747b14bf0ac615c3d12e876442601f488621cc6c440414e11e0" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.511053 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pqfh" event={"ID":"1b2510b4-6dfe-4752-be13-cf4693bc139c","Type":"ContainerDied","Data":"935dcb9e336d6747b14bf0ac615c3d12e876442601f488621cc6c440414e11e0"} Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.511101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pqfh" 
event={"ID":"1b2510b4-6dfe-4752-be13-cf4693bc139c","Type":"ContainerStarted","Data":"d5b512ad0c5298846c1e75ab1020c6e2cea23708c8d00370e8236e9c720af0aa"} Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.513533 4835 generic.go:334] "Generic (PLEG): container finished" podID="30b77bb8-df69-4504-ae68-5e327d4a3992" containerID="116a697cbecb5d4be9a9efb4929a594b1bf1253874a89250e1df4b1e9edd0efa" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.513607 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsmv7" event={"ID":"30b77bb8-df69-4504-ae68-5e327d4a3992","Type":"ContainerDied","Data":"116a697cbecb5d4be9a9efb4929a594b1bf1253874a89250e1df4b1e9edd0efa"} Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.516183 4835 generic.go:334] "Generic (PLEG): container finished" podID="4fa95b30-fb32-42f1-a406-14562750b5c9" containerID="96bfdb6c18cfaca974a4e06669b5a661c1fb42da564e51e432ae89805632ff2d" exitCode=0 Jan 29 15:35:02 crc kubenswrapper[4835]: I0129 15:35:02.516220 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8mjw" event={"ID":"4fa95b30-fb32-42f1-a406-14562750b5c9","Type":"ContainerDied","Data":"96bfdb6c18cfaca974a4e06669b5a661c1fb42da564e51e432ae89805632ff2d"} Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 15:35:03.524044 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pqfh" event={"ID":"1b2510b4-6dfe-4752-be13-cf4693bc139c","Type":"ContainerStarted","Data":"a64c05071199c1e8591f44019c446b00c0cd4f8592f7f333f9b463f16a9e01bb"} Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 15:35:03.528063 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xsmv7" event={"ID":"30b77bb8-df69-4504-ae68-5e327d4a3992","Type":"ContainerStarted","Data":"877c8f2504bd95cbada789766a0e2c7c56a410eac42a5cbe95ad7f28b4d5dd61"} Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 
15:35:03.530855 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z8mjw" event={"ID":"4fa95b30-fb32-42f1-a406-14562750b5c9","Type":"ContainerStarted","Data":"71a3b18cbacee1e5d74ee0e5778346051318f76378d5bd611851500079b29e31"} Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 15:35:03.532879 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5k76" event={"ID":"44dfd56f-fd32-439c-9b01-0405d1947663","Type":"ContainerStarted","Data":"11037caab5565ab92d68d2f779220bebcf2cdc45847cf5093934a4d4702e4338"} Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 15:35:03.590976 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z8mjw" podStartSLOduration=2.151951561 podStartE2EDuration="4.590954347s" podCreationTimestamp="2026-01-29 15:34:59 +0000 UTC" firstStartedPulling="2026-01-29 15:35:00.495692205 +0000 UTC m=+402.688735859" lastFinishedPulling="2026-01-29 15:35:02.934694991 +0000 UTC m=+405.127738645" observedRunningTime="2026-01-29 15:35:03.589655953 +0000 UTC m=+405.782699617" watchObservedRunningTime="2026-01-29 15:35:03.590954347 +0000 UTC m=+405.783998011" Jan 29 15:35:03 crc kubenswrapper[4835]: I0129 15:35:03.593602 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xsmv7" podStartSLOduration=3.14399379 podStartE2EDuration="5.593588997s" podCreationTimestamp="2026-01-29 15:34:58 +0000 UTC" firstStartedPulling="2026-01-29 15:35:00.498279233 +0000 UTC m=+402.691322887" lastFinishedPulling="2026-01-29 15:35:02.94787444 +0000 UTC m=+405.140918094" observedRunningTime="2026-01-29 15:35:03.565175474 +0000 UTC m=+405.758219128" watchObservedRunningTime="2026-01-29 15:35:03.593588997 +0000 UTC m=+405.786632651" Jan 29 15:35:04 crc kubenswrapper[4835]: I0129 15:35:04.539807 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="44dfd56f-fd32-439c-9b01-0405d1947663" containerID="11037caab5565ab92d68d2f779220bebcf2cdc45847cf5093934a4d4702e4338" exitCode=0 Jan 29 15:35:04 crc kubenswrapper[4835]: I0129 15:35:04.540139 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5k76" event={"ID":"44dfd56f-fd32-439c-9b01-0405d1947663","Type":"ContainerDied","Data":"11037caab5565ab92d68d2f779220bebcf2cdc45847cf5093934a4d4702e4338"} Jan 29 15:35:04 crc kubenswrapper[4835]: I0129 15:35:04.542987 4835 generic.go:334] "Generic (PLEG): container finished" podID="1b2510b4-6dfe-4752-be13-cf4693bc139c" containerID="a64c05071199c1e8591f44019c446b00c0cd4f8592f7f333f9b463f16a9e01bb" exitCode=0 Jan 29 15:35:04 crc kubenswrapper[4835]: I0129 15:35:04.544021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pqfh" event={"ID":"1b2510b4-6dfe-4752-be13-cf4693bc139c","Type":"ContainerDied","Data":"a64c05071199c1e8591f44019c446b00c0cd4f8592f7f333f9b463f16a9e01bb"} Jan 29 15:35:05 crc kubenswrapper[4835]: I0129 15:35:05.553036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k5k76" event={"ID":"44dfd56f-fd32-439c-9b01-0405d1947663","Type":"ContainerStarted","Data":"4f28f4378a2c692b4416e9e33b78e0d358b46e86e070597dc1f9239728f51fe9"} Jan 29 15:35:05 crc kubenswrapper[4835]: I0129 15:35:05.559320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pqfh" event={"ID":"1b2510b4-6dfe-4752-be13-cf4693bc139c","Type":"ContainerStarted","Data":"f00ee4b9fa3a41d28953cd6fbb62b2e75eef8f83c69004d169d15167854b0300"} Jan 29 15:35:05 crc kubenswrapper[4835]: I0129 15:35:05.576062 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k5k76" podStartSLOduration=2.123879152 podStartE2EDuration="4.576038318s" podCreationTimestamp="2026-01-29 15:35:01 +0000 UTC" 
firstStartedPulling="2026-01-29 15:35:02.511280663 +0000 UTC m=+404.704324307" lastFinishedPulling="2026-01-29 15:35:04.963439819 +0000 UTC m=+407.156483473" observedRunningTime="2026-01-29 15:35:05.57155565 +0000 UTC m=+407.764599314" watchObservedRunningTime="2026-01-29 15:35:05.576038318 +0000 UTC m=+407.769081972" Jan 29 15:35:05 crc kubenswrapper[4835]: I0129 15:35:05.592679 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9pqfh" podStartSLOduration=2.12374737 podStartE2EDuration="4.592660079s" podCreationTimestamp="2026-01-29 15:35:01 +0000 UTC" firstStartedPulling="2026-01-29 15:35:02.512775993 +0000 UTC m=+404.705819647" lastFinishedPulling="2026-01-29 15:35:04.981688702 +0000 UTC m=+407.174732356" observedRunningTime="2026-01-29 15:35:05.590597364 +0000 UTC m=+407.783641018" watchObservedRunningTime="2026-01-29 15:35:05.592660079 +0000 UTC m=+407.785703733" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.288784 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.289420 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.331376 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.459778 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.460074 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.497930 4835 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.620761 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xsmv7" Jan 29 15:35:09 crc kubenswrapper[4835]: I0129 15:35:09.621171 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z8mjw" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.665058 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.665293 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.706064 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.875085 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.875149 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:11 crc kubenswrapper[4835]: I0129 15:35:11.909231 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:12 crc kubenswrapper[4835]: I0129 15:35:12.638323 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9pqfh" Jan 29 15:35:12 crc kubenswrapper[4835]: I0129 15:35:12.641221 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k5k76" Jan 29 
15:35:15 crc kubenswrapper[4835]: I0129 15:35:15.156018 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerName="registry" containerID="cri-o://14a2bed51a9199cc44feeadffe1df61515df46cea2b9cde503d55590ed1094c7" gracePeriod=30 Jan 29 15:35:19 crc kubenswrapper[4835]: I0129 15:35:19.944490 4835 generic.go:334] "Generic (PLEG): container finished" podID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerID="14a2bed51a9199cc44feeadffe1df61515df46cea2b9cde503d55590ed1094c7" exitCode=0 Jan 29 15:35:19 crc kubenswrapper[4835]: I0129 15:35:19.944578 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" event={"ID":"7b49fcb5-205f-42cc-92a5-eb8190c06f15","Type":"ContainerDied","Data":"14a2bed51a9199cc44feeadffe1df61515df46cea2b9cde503d55590ed1094c7"} Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.161722 4835 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-cbsh8 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.24:5000/healthz\": dial tcp 10.217.0.24:5000: connect: connection refused" start-of-body= Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.161783 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.24:5000/healthz\": dial tcp 10.217.0.24:5000: connect: connection refused" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.930489 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935453 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935515 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s772\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935578 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.935596 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.936382 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.936994 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.942033 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.943935 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772" (OuterVolumeSpecName: "kube-api-access-4s772") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). 
InnerVolumeSpecName "kube-api-access-4s772". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.945768 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.956048 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.962515 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" event={"ID":"7b49fcb5-205f-42cc-92a5-eb8190c06f15","Type":"ContainerDied","Data":"bd6fc3a2ef4c641c081139299eec055b58c559071563692c4da081f2ee30b9b4"} Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.962565 4835 scope.go:117] "RemoveContainer" containerID="14a2bed51a9199cc44feeadffe1df61515df46cea2b9cde503d55590ed1094c7" Jan 29 15:35:20 crc kubenswrapper[4835]: I0129 15:35:20.962681 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-cbsh8" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.036529 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.036847 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\" (UID: \"7b49fcb5-205f-42cc-92a5-eb8190c06f15\") " Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037431 4835 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7b49fcb5-205f-42cc-92a5-eb8190c06f15-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037460 4835 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037520 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s772\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-kube-api-access-4s772\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037534 4835 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7b49fcb5-205f-42cc-92a5-eb8190c06f15-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037546 4835 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.037556 4835 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7b49fcb5-205f-42cc-92a5-eb8190c06f15-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.039366 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.052776 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7b49fcb5-205f-42cc-92a5-eb8190c06f15" (UID: "7b49fcb5-205f-42cc-92a5-eb8190c06f15"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.138941 4835 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7b49fcb5-205f-42cc-92a5-eb8190c06f15-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.289956 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.294238 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-cbsh8"] Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.614541 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.614607 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.614659 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.615215 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.615275 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b" gracePeriod=600 Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.970507 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b" exitCode=0 Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.970610 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b"} Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.970979 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f"} Jan 29 15:35:21 crc kubenswrapper[4835]: I0129 15:35:21.971021 4835 scope.go:117] "RemoveContainer" containerID="0d16ee1480832405de16bc4374258d7628a66ede197c0bd46774558decf5ec6d" Jan 29 15:35:22 crc kubenswrapper[4835]: I0129 15:35:22.503640 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" path="/var/lib/kubelet/pods/7b49fcb5-205f-42cc-92a5-eb8190c06f15/volumes" Jan 29 15:37:21 crc kubenswrapper[4835]: I0129 15:37:21.614202 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:21 crc kubenswrapper[4835]: I0129 15:37:21.614737 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:37:51 crc kubenswrapper[4835]: I0129 15:37:51.614803 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:37:51 crc kubenswrapper[4835]: I0129 15:37:51.615433 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:21 crc kubenswrapper[4835]: I0129 15:38:21.615137 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:38:21 crc kubenswrapper[4835]: I0129 15:38:21.615762 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:38:21 crc kubenswrapper[4835]: I0129 15:38:21.615807 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:38:21 crc kubenswrapper[4835]: I0129 15:38:21.616372 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:38:21 crc kubenswrapper[4835]: I0129 15:38:21.616422 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f" gracePeriod=600 Jan 29 15:38:22 crc kubenswrapper[4835]: I0129 15:38:22.745552 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f" exitCode=0 Jan 29 15:38:22 crc kubenswrapper[4835]: I0129 15:38:22.745613 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f"} Jan 29 15:38:22 crc kubenswrapper[4835]: I0129 15:38:22.745968 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" 
event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8"} Jan 29 15:38:22 crc kubenswrapper[4835]: I0129 15:38:22.746638 4835 scope.go:117] "RemoveContainer" containerID="227776fa5f6ebd91b55a005a4789693f07c81ccef3f25c3220b810063f24bf8b" Jan 29 15:40:51 crc kubenswrapper[4835]: I0129 15:40:51.614576 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:40:51 crc kubenswrapper[4835]: I0129 15:40:51.616356 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.169631 4835 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397292 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j9fl"] Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397668 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-controller" containerID="cri-o://10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397780 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" 
podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="northd" containerID="cri-o://c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397755 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="nbdb" containerID="cri-o://1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397810 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-node" containerID="cri-o://c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397840 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-acl-logging" containerID="cri-o://aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397769 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="sbdb" containerID="cri-o://49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.397924 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984" 
gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.426784 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" containerID="cri-o://576d4fd3b9f9db3b346f0a358de77bf2faf241d81532ec4587d184d496fceb5d" gracePeriod=30 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.640508 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/4.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.641073 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/3.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.643188 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-acl-logging/0.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.643803 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-controller/0.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644157 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="576d4fd3b9f9db3b346f0a358de77bf2faf241d81532ec4587d184d496fceb5d" exitCode=2 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644181 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644189 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" 
containerID="1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644197 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644203 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644209 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d" exitCode=0 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644215 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041" exitCode=143 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644221 4835 generic.go:334] "Generic (PLEG): container finished" podID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerID="10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf" exitCode=143 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644268 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"576d4fd3b9f9db3b346f0a358de77bf2faf241d81532ec4587d184d496fceb5d"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644296 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" 
event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644306 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644325 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644334 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644343 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644351 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" 
event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644360 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" event={"ID":"06588e6b-ac15-40f5-ae5d-03f95146dc91","Type":"ContainerDied","Data":"f80b9071f79d9408916ddeb8d4f18495e394ef0e7c22a82860b4e75c86f7a385"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644369 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f80b9071f79d9408916ddeb8d4f18495e394ef0e7c22a82860b4e75c86f7a385" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.644383 4835 scope.go:117] "RemoveContainer" containerID="2a204d1ec4beb65e1e87aea29504d39173190dd54186e43ee3ea9def7eabfabd" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.647047 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/2.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.649380 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/1.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.649432 4835 generic.go:334] "Generic (PLEG): container finished" podID="9537ec35-484e-4eeb-b081-e453750fb39e" containerID="45aa09a6978751ffa20695bfce6a11506e5ba3fac9f7b465cf9f981e178787a1" exitCode=2 Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.649506 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerDied","Data":"45aa09a6978751ffa20695bfce6a11506e5ba3fac9f7b465cf9f981e178787a1"} Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.649954 4835 scope.go:117] "RemoveContainer" 
containerID="45aa09a6978751ffa20695bfce6a11506e5ba3fac9f7b465cf9f981e178787a1" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.681355 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/4.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.684370 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-acl-logging/0.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.685270 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-controller/0.log" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.686517 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.687363 4835 scope.go:117] "RemoveContainer" containerID="a83349f4b11ed9851e70d8f4d33d40f1dec037d4f24f5f05c0b2cc2439c38759" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743710 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qh6np"] Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743900 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743913 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743921 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743927 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743934 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="northd" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743939 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="northd" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743945 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerName="registry" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743951 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerName="registry" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743959 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-node" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743965 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-node" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743973 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743980 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.743990 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kubecfg-setup" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.743995 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kubecfg-setup" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744002 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744007 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744014 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="nbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744020 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="nbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744027 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="sbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744032 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="sbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744042 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744049 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744058 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-acl-logging" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744064 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-acl-logging" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744161 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="sbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744171 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744177 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="nbdb" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744184 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744190 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744195 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovn-acl-logging" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744203 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744211 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744217 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744231 4835 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="kube-rbac-proxy-node" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744239 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b49fcb5-205f-42cc-92a5-eb8190c06f15" containerName="registry" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744247 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="northd" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744341 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744348 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: E0129 15:41:05.744356 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744361 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.744445 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" containerName="ovnkube-controller" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.745800 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787754 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787807 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787827 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787843 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787881 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787906 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787927 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787945 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787965 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.787984 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788001 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch\") pod 
\"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788018 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788035 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788057 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788104 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788126 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788146 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788164 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788186 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkhxl\" (UniqueName: \"kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl\") pod \"06588e6b-ac15-40f5-ae5d-03f95146dc91\" (UID: \"06588e6b-ac15-40f5-ae5d-03f95146dc91\") " Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788333 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-systemd-units\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788358 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-slash\") pod \"ovnkube-node-qh6np\" (UID: 
\"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788381 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-log-socket\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788399 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2hk4\" (UniqueName: \"kubernetes.io/projected/4172bd4e-941d-469b-bf42-1f1bfc76531e-kube-api-access-z2hk4\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788420 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-bin\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788435 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovn-node-metrics-cert\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788510 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-node-log\") pod 
\"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788529 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-ovn\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788549 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-netns\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788566 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-env-overrides\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788585 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-var-lib-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788603 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-etc-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-kubelet\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788646 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788664 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-config\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788690 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788712 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-netd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788730 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-systemd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788749 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-script-lib\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788770 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788869 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788917 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.788954 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791612 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791670 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791722 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791740 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791766 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash" (OuterVolumeSpecName: "host-slash") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.791785 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.792119 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.792526 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket" (OuterVolumeSpecName: "log-socket") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.792871 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.792900 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.792926 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log" (OuterVolumeSpecName: "node-log") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.795809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl" (OuterVolumeSpecName: "kube-api-access-lkhxl") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "kube-api-access-lkhxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.803919 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.806594 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "06588e6b-ac15-40f5-ae5d-03f95146dc91" (UID: "06588e6b-ac15-40f5-ae5d-03f95146dc91"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890448 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-node-log\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890550 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-ovn\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890570 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-netns\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890566 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-node-log\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890589 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-env-overrides\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890654 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-var-lib-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890678 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-etc-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890726 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-kubelet\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890731 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-var-lib-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890754 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-netns\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890794 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-config\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890825 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-etc-openvswitch\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890925 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-kubelet\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.890985 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891034 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-netd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891052 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891057 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-systemd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-systemd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891106 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-script-lib\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891123 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-netd\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891135 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891250 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-systemd-units\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891285 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-slash\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891323 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-log-socket\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891353 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2hk4\" (UniqueName: \"kubernetes.io/projected/4172bd4e-941d-469b-bf42-1f1bfc76531e-kube-api-access-z2hk4\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891382 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-bin\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovn-node-metrics-cert\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891493 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-env-overrides\") 
pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891497 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-slash\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891546 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-log-socket\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891526 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891578 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-host-cni-bin\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891655 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891669 4835 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891679 4835 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891688 4835 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891697 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891706 4835 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891697 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-systemd-units\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891716 4835 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891814 4835 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891829 4835 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891839 4835 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891848 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkhxl\" (UniqueName: \"kubernetes.io/projected/06588e6b-ac15-40f5-ae5d-03f95146dc91-kube-api-access-lkhxl\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891860 4835 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891871 4835 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891882 4835 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891891 4835 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891900 4835 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891909 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891918 4835 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/06588e6b-ac15-40f5-ae5d-03f95146dc91-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891928 4835 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/06588e6b-ac15-40f5-ae5d-03f95146dc91-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.891953 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-config\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.892049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovnkube-script-lib\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 
15:41:05.891151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4172bd4e-941d-469b-bf42-1f1bfc76531e-run-ovn\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.895530 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4172bd4e-941d-469b-bf42-1f1bfc76531e-ovn-node-metrics-cert\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:05 crc kubenswrapper[4835]: I0129 15:41:05.909681 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2hk4\" (UniqueName: \"kubernetes.io/projected/4172bd4e-941d-469b-bf42-1f1bfc76531e-kube-api-access-z2hk4\") pod \"ovnkube-node-qh6np\" (UID: \"4172bd4e-941d-469b-bf42-1f1bfc76531e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.060245 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:06 crc kubenswrapper[4835]: W0129 15:41:06.074908 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4172bd4e_941d_469b_bf42_1f1bfc76531e.slice/crio-2e74a60ae9ea4a57a4bcb735ea54101a6b026635c783f849aadd11368fa346ac WatchSource:0}: Error finding container 2e74a60ae9ea4a57a4bcb735ea54101a6b026635c783f849aadd11368fa346ac: Status 404 returned error can't find the container with id 2e74a60ae9ea4a57a4bcb735ea54101a6b026635c783f849aadd11368fa346ac Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.655298 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovnkube-controller/4.log" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.657454 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-acl-logging/0.log" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.657967 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-5j9fl_06588e6b-ac15-40f5-ae5d-03f95146dc91/ovn-controller/0.log" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.658843 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5j9fl" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.660437 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6sstt_9537ec35-484e-4eeb-b081-e453750fb39e/kube-multus/2.log" Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.660532 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6sstt" event={"ID":"9537ec35-484e-4eeb-b081-e453750fb39e","Type":"ContainerStarted","Data":"5a9d85faf593a2cf09a0e2e0ce13c57a5727c831d85b4b8477e205deebefc1c1"} Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.662264 4835 generic.go:334] "Generic (PLEG): container finished" podID="4172bd4e-941d-469b-bf42-1f1bfc76531e" containerID="333e1cab6ece8323947617529e0e78e567189d214655d1eb3086f94dcba12a29" exitCode=0 Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.662289 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerDied","Data":"333e1cab6ece8323947617529e0e78e567189d214655d1eb3086f94dcba12a29"} Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.662305 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"2e74a60ae9ea4a57a4bcb735ea54101a6b026635c783f849aadd11368fa346ac"} Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.741443 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j9fl"] Jan 29 15:41:06 crc kubenswrapper[4835]: I0129 15:41:06.745637 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5j9fl"] Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.669911 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" 
event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"b317004d3c573b18b50a645b73467746bb1100bab353d46dc1451cfb668e1fc5"} Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.670165 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"06b7f5887b53c2b365feae16f58dd8d8bb7213fbb4134dc613b211a499e429ed"} Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.670176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"53e816ff3d8e75a3a8308a409003806191e0f2af206a714eed7b1dc13d464e63"} Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.670185 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"acbcea5c69038303b37538712ef55494b6814022e203fc36bf2eb43d01f3b385"} Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.670193 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"e9312064595d5a4c20ab3f489c59c9e57604803b7e79b927b1c3bb670b499cd0"} Jan 29 15:41:07 crc kubenswrapper[4835]: I0129 15:41:07.670201 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"3b24b885008a95c24b04012e0d44e15225ec121d77b8be4508a0a90cfe60d5c8"} Jan 29 15:41:08 crc kubenswrapper[4835]: I0129 15:41:08.498149 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06588e6b-ac15-40f5-ae5d-03f95146dc91" 
path="/var/lib/kubelet/pods/06588e6b-ac15-40f5-ae5d-03f95146dc91/volumes" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.125229 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-xst45"] Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.126282 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.128268 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.128357 4835 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-w7hf7" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.128268 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.128500 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.144322 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7m8\" (UniqueName: \"kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.144380 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.144569 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.246055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.246128 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g7m8\" (UniqueName: \"kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.246179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.246425 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.246853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.269126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g7m8\" (UniqueName: \"kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8\") pod \"crc-storage-crc-xst45\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.440931 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: E0129 15:41:10.467435 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(717a394c7b8cce35f60065c58a2467e6455d522b2f4839c20737eec087b21d07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:41:10 crc kubenswrapper[4835]: E0129 15:41:10.467554 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(717a394c7b8cce35f60065c58a2467e6455d522b2f4839c20737eec087b21d07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: E0129 15:41:10.467585 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(717a394c7b8cce35f60065c58a2467e6455d522b2f4839c20737eec087b21d07): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:10 crc kubenswrapper[4835]: E0129 15:41:10.467639 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xst45_crc-storage(86ae5892-5fd3-47e5-99cb-edf9ff7ca506)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xst45_crc-storage(86ae5892-5fd3-47e5-99cb-edf9ff7ca506)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(717a394c7b8cce35f60065c58a2467e6455d522b2f4839c20737eec087b21d07): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xst45" podUID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" Jan 29 15:41:10 crc kubenswrapper[4835]: I0129 15:41:10.695978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"3f767c14738a68d62eccdf98caaef7ebf24fe0523a056e0fa364bfc1257b1476"} Jan 29 15:41:12 crc kubenswrapper[4835]: I0129 15:41:12.710934 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" event={"ID":"4172bd4e-941d-469b-bf42-1f1bfc76531e","Type":"ContainerStarted","Data":"86c850f049db30c1c4bd6158e775fa8f33690c22ff07fa90bdda566004a55e3e"} Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.717667 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.717915 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.717991 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.742370 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.753999 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" podStartSLOduration=8.753973521 podStartE2EDuration="8.753973521s" podCreationTimestamp="2026-01-29 15:41:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:41:13.747762096 +0000 UTC m=+775.940805760" 
watchObservedRunningTime="2026-01-29 15:41:13.753973521 +0000 UTC m=+775.947017165" Jan 29 15:41:13 crc kubenswrapper[4835]: I0129 15:41:13.763729 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:15 crc kubenswrapper[4835]: I0129 15:41:15.017342 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xst45"] Jan 29 15:41:15 crc kubenswrapper[4835]: I0129 15:41:15.017503 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:15 crc kubenswrapper[4835]: I0129 15:41:15.018105 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:15 crc kubenswrapper[4835]: E0129 15:41:15.042956 4835 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(0e47e6f2e2ddc8ff496a2ff64b32806e14cd8d1ecbe43969731858a8c1ccae51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 15:41:15 crc kubenswrapper[4835]: E0129 15:41:15.043035 4835 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(0e47e6f2e2ddc8ff496a2ff64b32806e14cd8d1ecbe43969731858a8c1ccae51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:15 crc kubenswrapper[4835]: E0129 15:41:15.043065 4835 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(0e47e6f2e2ddc8ff496a2ff64b32806e14cd8d1ecbe43969731858a8c1ccae51): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:15 crc kubenswrapper[4835]: E0129 15:41:15.043115 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-xst45_crc-storage(86ae5892-5fd3-47e5-99cb-edf9ff7ca506)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-xst45_crc-storage(86ae5892-5fd3-47e5-99cb-edf9ff7ca506)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-xst45_crc-storage_86ae5892-5fd3-47e5-99cb-edf9ff7ca506_0(0e47e6f2e2ddc8ff496a2ff64b32806e14cd8d1ecbe43969731858a8c1ccae51): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-xst45" podUID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.414733 4835 scope.go:117] "RemoveContainer" containerID="1d93e6f5e4b34263fabc783eed71d068a43a95352eb5654fc708b19e8c0df7a3" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.436686 4835 scope.go:117] "RemoveContainer" containerID="54ad4c93bc4de1d54fc3353300617b6ed821fc930ea3935e4d6870e445982579" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.453529 4835 scope.go:117] "RemoveContainer" containerID="49bdd4b9f5d7e12ef8e8fa2e7be51f60cce1a5beed741897eafcf684f4c7444c" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.485908 4835 scope.go:117] "RemoveContainer" containerID="10f4f32b2fe1cbd2e6ccc2b58ad0f5eb6f3eaa7b3701650af8be633609b20abf" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.508777 4835 scope.go:117] "RemoveContainer" containerID="c06c091c0ce58d5417676b327f27c8d99a55f254a994ad1acca94f2620eda591" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.528285 4835 scope.go:117] "RemoveContainer" containerID="c8ad477fc89b88173bb2f5b8e6dc553676b585af9a305ff51dcbefcff806191d" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.543946 4835 scope.go:117] "RemoveContainer" containerID="e8c786041c6a01c500e3bb7c6a7393c9abc06a687a4db787aad1ee84ebfae984" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.562729 4835 scope.go:117] "RemoveContainer" containerID="aa750a69917d222c270adebeab8129dc409fca825c1f92b11b7a2ea10b9f0041" Jan 29 15:41:19 crc kubenswrapper[4835]: I0129 15:41:19.575286 4835 scope.go:117] "RemoveContainer" containerID="576d4fd3b9f9db3b346f0a358de77bf2faf241d81532ec4587d184d496fceb5d" Jan 29 15:41:21 crc kubenswrapper[4835]: I0129 15:41:21.615115 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:21 crc kubenswrapper[4835]: I0129 15:41:21.615531 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:30 crc kubenswrapper[4835]: I0129 15:41:30.491671 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:30 crc kubenswrapper[4835]: I0129 15:41:30.492487 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:30 crc kubenswrapper[4835]: I0129 15:41:30.895840 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-xst45"] Jan 29 15:41:30 crc kubenswrapper[4835]: W0129 15:41:30.906197 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86ae5892_5fd3_47e5_99cb_edf9ff7ca506.slice/crio-83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec WatchSource:0}: Error finding container 83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec: Status 404 returned error can't find the container with id 83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec Jan 29 15:41:30 crc kubenswrapper[4835]: I0129 15:41:30.908939 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:41:31 crc kubenswrapper[4835]: I0129 15:41:31.819384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xst45" 
event={"ID":"86ae5892-5fd3-47e5-99cb-edf9ff7ca506","Type":"ContainerStarted","Data":"83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec"} Jan 29 15:41:32 crc kubenswrapper[4835]: I0129 15:41:32.826442 4835 generic.go:334] "Generic (PLEG): container finished" podID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" containerID="3fa34a45193880383fe8e32afbdc69a3b2f5dbace98187bee5dd343a647277d9" exitCode=0 Jan 29 15:41:32 crc kubenswrapper[4835]: I0129 15:41:32.826551 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xst45" event={"ID":"86ae5892-5fd3-47e5-99cb-edf9ff7ca506","Type":"ContainerDied","Data":"3fa34a45193880383fe8e32afbdc69a3b2f5dbace98187bee5dd343a647277d9"} Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.016218 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.068244 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt\") pod \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.068419 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "86ae5892-5fd3-47e5-99cb-edf9ff7ca506" (UID: "86ae5892-5fd3-47e5-99cb-edf9ff7ca506"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.068511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g7m8\" (UniqueName: \"kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8\") pod \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.068579 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage\") pod \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\" (UID: \"86ae5892-5fd3-47e5-99cb-edf9ff7ca506\") " Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.068755 4835 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-node-mnt\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.073971 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8" (OuterVolumeSpecName: "kube-api-access-6g7m8") pod "86ae5892-5fd3-47e5-99cb-edf9ff7ca506" (UID: "86ae5892-5fd3-47e5-99cb-edf9ff7ca506"). InnerVolumeSpecName "kube-api-access-6g7m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.083524 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "86ae5892-5fd3-47e5-99cb-edf9ff7ca506" (UID: "86ae5892-5fd3-47e5-99cb-edf9ff7ca506"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.169967 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g7m8\" (UniqueName: \"kubernetes.io/projected/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-kube-api-access-6g7m8\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.170018 4835 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/86ae5892-5fd3-47e5-99cb-edf9ff7ca506-crc-storage\") on node \"crc\" DevicePath \"\"" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.839077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-xst45" event={"ID":"86ae5892-5fd3-47e5-99cb-edf9ff7ca506","Type":"ContainerDied","Data":"83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec"} Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.839126 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83853c4793eeb2d840d1340d74c5b05915c35be5075dd05dbc00f66f190aaaec" Jan 29 15:41:34 crc kubenswrapper[4835]: I0129 15:41:34.839107 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-xst45" Jan 29 15:41:36 crc kubenswrapper[4835]: I0129 15:41:36.096400 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qh6np" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.202332 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm"] Jan 29 15:41:41 crc kubenswrapper[4835]: E0129 15:41:41.203061 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" containerName="storage" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.203072 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" containerName="storage" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.203159 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ae5892-5fd3-47e5-99cb-edf9ff7ca506" containerName="storage" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.203812 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.205784 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.212476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm"] Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.253679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.253744 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9th\" (UniqueName: \"kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.253787 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: 
I0129 15:41:41.354401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx9th\" (UniqueName: \"kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.354643 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.354753 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.355193 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.355319 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.375314 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx9th\" (UniqueName: \"kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.534667 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:41:41 crc kubenswrapper[4835]: I0129 15:41:41.909089 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm"] Jan 29 15:41:42 crc kubenswrapper[4835]: I0129 15:41:42.885702 4835 generic.go:334] "Generic (PLEG): container finished" podID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerID="e6d371ea5fc473d4265dfa3e45c89b8285f620b2d82129fe5c1a94c2b3b731df" exitCode=0 Jan 29 15:41:42 crc kubenswrapper[4835]: I0129 15:41:42.885796 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" event={"ID":"d892ba7a-74a8-4d73-942f-0eb27888b77f","Type":"ContainerDied","Data":"e6d371ea5fc473d4265dfa3e45c89b8285f620b2d82129fe5c1a94c2b3b731df"} Jan 29 15:41:42 crc kubenswrapper[4835]: I0129 15:41:42.886161 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" event={"ID":"d892ba7a-74a8-4d73-942f-0eb27888b77f","Type":"ContainerStarted","Data":"6b5ecefc365f4a4cc7a21a5d8d31b6e566ef5b254536f045920cbcc83187d098"} Jan 29 15:41:43 crc kubenswrapper[4835]: E0129 15:41:43.016830 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:41:43 crc kubenswrapper[4835]: E0129 15:41:43.017019 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_openshift-marketplace(d892ba7a-74a8-4d73-942f-0eb27888b77f): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:41:43 crc kubenswrapper[4835]: E0129 15:41:43.018230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.357158 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.358223 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.371624 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.482365 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.482439 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpzp\" (UniqueName: \"kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.482500 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.583248 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.583308 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpzp\" (UniqueName: \"kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.583347 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.583877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.583879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.608180 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpzp\" 
(UniqueName: \"kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp\") pod \"redhat-operators-8g2rj\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.672612 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.858838 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:41:43 crc kubenswrapper[4835]: W0129 15:41:43.867897 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55a2b862_57f9_43ad_bdb8_f523d3c49f17.slice/crio-b062092ffdf5acb7918afd808e185456579d98591c1e4999b465268d7070287f WatchSource:0}: Error finding container b062092ffdf5acb7918afd808e185456579d98591c1e4999b465268d7070287f: Status 404 returned error can't find the container with id b062092ffdf5acb7918afd808e185456579d98591c1e4999b465268d7070287f Jan 29 15:41:43 crc kubenswrapper[4835]: I0129 15:41:43.891768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerStarted","Data":"b062092ffdf5acb7918afd808e185456579d98591c1e4999b465268d7070287f"} Jan 29 15:41:43 crc kubenswrapper[4835]: E0129 15:41:43.893006 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:41:44 crc kubenswrapper[4835]: I0129 
15:41:44.898801 4835 generic.go:334] "Generic (PLEG): container finished" podID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerID="ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c" exitCode=0 Jan 29 15:41:44 crc kubenswrapper[4835]: I0129 15:41:44.898842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerDied","Data":"ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c"} Jan 29 15:41:45 crc kubenswrapper[4835]: E0129 15:41:45.023738 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:41:45 crc kubenswrapper[4835]: E0129 15:41:45.024197 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8g2rj_openshift-marketplace(55a2b862-57f9-43ad-bdb8-f523d3c49f17): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:41:45 crc kubenswrapper[4835]: E0129 15:41:45.026001 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:41:45 crc kubenswrapper[4835]: E0129 15:41:45.905349 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.614607 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.614900 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.614943 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.615518 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.615569 4835 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8" gracePeriod=600 Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.949812 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8" exitCode=0 Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.949897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8"} Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.950245 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600"} Jan 29 15:41:51 crc kubenswrapper[4835]: I0129 15:41:51.950277 4835 scope.go:117] "RemoveContainer" containerID="960e02b9304e0850d4be3c3f7a1bf2be0c7557354cbe8ca7dadf98cff306bd9f" Jan 29 15:41:55 crc kubenswrapper[4835]: E0129 15:41:55.631035 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:41:55 crc kubenswrapper[4835]: E0129 15:41:55.631791 4835 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_openshift-marketplace(d892ba7a-74a8-4d73-942f-0eb27888b77f): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
logger="UnhandledError" Jan 29 15:41:55 crc kubenswrapper[4835]: E0129 15:41:55.633051 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:41:57 crc kubenswrapper[4835]: E0129 15:41:57.613994 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:41:57 crc kubenswrapper[4835]: E0129 15:41:57.614491 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8g2rj_openshift-marketplace(55a2b862-57f9-43ad-bdb8-f523d3c49f17): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:41:57 crc kubenswrapper[4835]: E0129 15:41:57.615858 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:42:10 crc kubenswrapper[4835]: E0129 15:42:10.494038 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:42:10 crc kubenswrapper[4835]: E0129 15:42:10.495245 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:42:22 crc kubenswrapper[4835]: E0129 15:42:22.621359 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:42:22 crc kubenswrapper[4835]: E0129 15:42:22.622201 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_openshift-marketplace(d892ba7a-74a8-4d73-942f-0eb27888b77f): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:42:22 crc kubenswrapper[4835]: E0129 15:42:22.623877 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid 
status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:42:24 crc kubenswrapper[4835]: E0129 15:42:24.660039 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:42:24 crc kubenswrapper[4835]: E0129 15:42:24.660325 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8g2rj_openshift-marketplace(55a2b862-57f9-43ad-bdb8-f523d3c49f17): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:42:24 crc kubenswrapper[4835]: E0129 15:42:24.661667 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:42:33 crc kubenswrapper[4835]: E0129 15:42:33.495755 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:42:38 crc kubenswrapper[4835]: E0129 15:42:38.496763 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:42:47 crc kubenswrapper[4835]: E0129 15:42:47.494696 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:42:50 crc kubenswrapper[4835]: E0129 15:42:50.496011 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:42:59 crc kubenswrapper[4835]: E0129 15:42:59.493991 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:43:03 crc kubenswrapper[4835]: E0129 15:43:03.493852 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.629421 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer 
token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.630120 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_openshift-marketplace(d892ba7a-74a8-4d73-942f-0eb27888b77f): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.631359 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.655969 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.656109 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8g2rj_openshift-marketplace(55a2b862-57f9-43ad-bdb8-f523d3c49f17): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:43:14 crc kubenswrapper[4835]: E0129 15:43:14.657946 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:43:25 crc kubenswrapper[4835]: E0129 15:43:25.507526 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:43:26 crc kubenswrapper[4835]: E0129 15:43:26.493071 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:43:37 crc kubenswrapper[4835]: E0129 15:43:37.493905 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:43:38 crc kubenswrapper[4835]: E0129 15:43:38.500249 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.639706 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-x6zmb"] Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.643581 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.651912 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zmb"] Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.794214 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.794301 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrkq8\" (UniqueName: \"kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.794450 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.895887 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " 
pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.895997 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.896048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrkq8\" (UniqueName: \"kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.896548 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.896651 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.923922 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrkq8\" (UniqueName: \"kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8\") pod \"redhat-marketplace-x6zmb\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") " 
pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:50 crc kubenswrapper[4835]: I0129 15:43:50.963643 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zmb" Jan 29 15:43:51 crc kubenswrapper[4835]: I0129 15:43:51.247594 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zmb"] Jan 29 15:43:51 crc kubenswrapper[4835]: I0129 15:43:51.610100 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerStarted","Data":"4e575c4e8d607623ea6b39738db0f609b497852fc4b64aa0a1ef2999860d252e"} Jan 29 15:43:51 crc kubenswrapper[4835]: I0129 15:43:51.614903 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:43:51 crc kubenswrapper[4835]: I0129 15:43:51.614983 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:43:52 crc kubenswrapper[4835]: E0129 15:43:52.494165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:43:52 crc kubenswrapper[4835]: E0129 15:43:52.494654 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:43:52 crc kubenswrapper[4835]: I0129 15:43:52.619294 4835 generic.go:334] "Generic (PLEG): container finished" podID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerID="6b28d5eeca741cf1d3d6cea9b4d6bb96bfe384efb5fc75e27315e8efe5a18f6c" exitCode=0 Jan 29 15:43:52 crc kubenswrapper[4835]: I0129 15:43:52.619338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerDied","Data":"6b28d5eeca741cf1d3d6cea9b4d6bb96bfe384efb5fc75e27315e8efe5a18f6c"} Jan 29 15:43:52 crc kubenswrapper[4835]: E0129 15:43:52.791936 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:43:52 crc kubenswrapper[4835]: E0129 15:43:52.793119 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x6zmb_openshift-marketplace(b35c1531-391d-4ce7-8e47-7dfd3c9cefd4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:43:52 crc kubenswrapper[4835]: E0129 15:43:52.794459 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:43:53 crc kubenswrapper[4835]: E0129 15:43:53.626534 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:44:03 crc kubenswrapper[4835]: E0129 15:44:03.496298 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:44:05 crc kubenswrapper[4835]: E0129 15:44:05.494547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:44:08 crc kubenswrapper[4835]: E0129 15:44:08.618783 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:44:08 crc kubenswrapper[4835]: E0129 15:44:08.619297 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x6zmb_openshift-marketplace(b35c1531-391d-4ce7-8e47-7dfd3c9cefd4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:08 crc kubenswrapper[4835]: E0129 15:44:08.620398 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:44:14 crc kubenswrapper[4835]: E0129 15:44:14.493853 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:44:19 crc kubenswrapper[4835]: E0129 15:44:19.493301 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:44:20 crc kubenswrapper[4835]: E0129 15:44:20.495775 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:44:21 crc kubenswrapper[4835]: I0129 15:44:21.614409 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:21 crc kubenswrapper[4835]: I0129 
15:44:21.614779 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:27 crc kubenswrapper[4835]: E0129 15:44:27.493888 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:44:33 crc kubenswrapper[4835]: E0129 15:44:33.493344 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:44:34 crc kubenswrapper[4835]: E0129 15:44:34.617283 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:44:34 crc kubenswrapper[4835]: E0129 15:44:34.617533 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x6zmb_openshift-marketplace(b35c1531-391d-4ce7-8e47-7dfd3c9cefd4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:34 crc kubenswrapper[4835]: E0129 15:44:34.619558 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.864826 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.867666 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.874033 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.974182 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.974447 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:36 crc kubenswrapper[4835]: I0129 15:44:36.974561 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l88\" (UniqueName: \"kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.076069 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.076401 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.076544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l88\" (UniqueName: \"kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.076656 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.077423 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.095814 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-97l88\" (UniqueName: \"kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88\") pod \"community-operators-4225d\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.220932 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.457428 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:44:37 crc kubenswrapper[4835]: W0129 15:44:37.466702 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528287ca_7a32_4daf_ac69_32ad3ea4afd1.slice/crio-63b4c7af750370787d5d7c1fa725d955fb80a7b762fc68c8bb619ab857dab07b WatchSource:0}: Error finding container 63b4c7af750370787d5d7c1fa725d955fb80a7b762fc68c8bb619ab857dab07b: Status 404 returned error can't find the container with id 63b4c7af750370787d5d7c1fa725d955fb80a7b762fc68c8bb619ab857dab07b Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.875264 4835 generic.go:334] "Generic (PLEG): container finished" podID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerID="71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26" exitCode=0 Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.875327 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerDied","Data":"71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26"} Jan 29 15:44:37 crc kubenswrapper[4835]: I0129 15:44:37.875367 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" 
event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerStarted","Data":"63b4c7af750370787d5d7c1fa725d955fb80a7b762fc68c8bb619ab857dab07b"} Jan 29 15:44:38 crc kubenswrapper[4835]: E0129 15:44:38.010848 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:38 crc kubenswrapper[4835]: E0129 15:44:38.011258 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Std
inOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4225d_openshift-marketplace(528287ca-7a32-4daf-ac69-32ad3ea4afd1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:38 crc kubenswrapper[4835]: E0129 15:44:38.012457 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:44:38 crc kubenswrapper[4835]: E0129 15:44:38.882265 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:44:40 crc kubenswrapper[4835]: E0129 15:44:40.625177 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14" Jan 29 15:44:40 crc kubenswrapper[4835]: E0129 15:44:40.625741 4835 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qx9th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_openshift-marketplace(d892ba7a-74a8-4d73-942f-0eb27888b77f): ErrImagePull: initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
logger="UnhandledError" Jan 29 15:44:40 crc kubenswrapper[4835]: E0129 15:44:40.627454 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"initializing source docker://registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.645003 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.646566 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.657369 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.823779 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.823829 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 
15:44:47.823860 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmcv6\" (UniqueName: \"kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.925575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.925632 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.925668 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmcv6\" (UniqueName: \"kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.926072 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc 
kubenswrapper[4835]: I0129 15:44:47.926157 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.947373 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmcv6\" (UniqueName: \"kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6\") pod \"certified-operators-bm6bl\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:47 crc kubenswrapper[4835]: I0129 15:44:47.964862 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:44:48 crc kubenswrapper[4835]: I0129 15:44:48.192946 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:44:48 crc kubenswrapper[4835]: W0129 15:44:48.206010 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd58f9cd_aab1_4e6c_96d1_e71cc49f8cf0.slice/crio-e335c3ae56ba3d8547c2ec63ace9b986d3d098e25e4ed43037b3249a94aec7b7 WatchSource:0}: Error finding container e335c3ae56ba3d8547c2ec63ace9b986d3d098e25e4ed43037b3249a94aec7b7: Status 404 returned error can't find the container with id e335c3ae56ba3d8547c2ec63ace9b986d3d098e25e4ed43037b3249a94aec7b7 Jan 29 15:44:48 crc kubenswrapper[4835]: E0129 15:44:48.629130 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:44:48 crc kubenswrapper[4835]: E0129 15:44:48.630104 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvpzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8g2rj_openshift-marketplace(55a2b862-57f9-43ad-bdb8-f523d3c49f17): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" logger="UnhandledError" Jan 29 15:44:48 crc kubenswrapper[4835]: E0129 15:44:48.631628 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:44:48 crc kubenswrapper[4835]: I0129 15:44:48.939866 4835 generic.go:334] "Generic (PLEG): container finished" podID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerID="e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10" exitCode=0 Jan 29 15:44:48 crc kubenswrapper[4835]: I0129 15:44:48.939911 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerDied","Data":"e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10"} Jan 29 15:44:48 crc kubenswrapper[4835]: I0129 15:44:48.939940 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerStarted","Data":"e335c3ae56ba3d8547c2ec63ace9b986d3d098e25e4ed43037b3249a94aec7b7"} Jan 29 15:44:49 crc kubenswrapper[4835]: E0129 15:44:49.093200 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:44:49 crc kubenswrapper[4835]: E0129 15:44:49.093339 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmcv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bm6bl_openshift-marketplace(cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:49 crc kubenswrapper[4835]: E0129 15:44:49.094524 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:44:49 crc kubenswrapper[4835]: E0129 15:44:49.496284 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:44:49 crc kubenswrapper[4835]: E0129 15:44:49.950179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.615191 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.615546 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.615591 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.616283 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.616348 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600" gracePeriod=600 Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.965134 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600" exitCode=0 Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.965222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600"} Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.965482 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b"} Jan 29 15:44:51 crc kubenswrapper[4835]: I0129 15:44:51.965505 4835 scope.go:117] "RemoveContainer" 
containerID="89fec8863a7ddbdcdb05e7552a62d6679e3dce4f50b4a33283b518f90565beb8" Jan 29 15:44:53 crc kubenswrapper[4835]: E0129 15:44:53.495132 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:44:53 crc kubenswrapper[4835]: E0129 15:44:53.617097 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:44:53 crc kubenswrapper[4835]: E0129 15:44:53.617833 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4225d_openshift-marketplace(528287ca-7a32-4daf-ac69-32ad3ea4afd1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:44:53 crc kubenswrapper[4835]: E0129 15:44:53.619297 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.140758 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82"] Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.142490 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.145648 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.145760 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.148595 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82"] Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.187015 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.187146 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc 
kubenswrapper[4835]: I0129 15:45:00.187174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49jl\" (UniqueName: \"kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.288064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.288142 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49jl\" (UniqueName: \"kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.288178 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.289218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume\") pod \"collect-profiles-29495025-kvn82\" 
(UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.298570 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.303815 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49jl\" (UniqueName: \"kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl\") pod \"collect-profiles-29495025-kvn82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.467783 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:00 crc kubenswrapper[4835]: I0129 15:45:00.655956 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82"] Jan 29 15:45:01 crc kubenswrapper[4835]: I0129 15:45:01.045173 4835 generic.go:334] "Generic (PLEG): container finished" podID="905028c3-8935-4143-9c14-3550f391ca82" containerID="41d0c94c66b10e93048a7093d48f121ff3238c0f71e4726cae6e37bfb40953f0" exitCode=0 Jan 29 15:45:01 crc kubenswrapper[4835]: I0129 15:45:01.045219 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" event={"ID":"905028c3-8935-4143-9c14-3550f391ca82","Type":"ContainerDied","Data":"41d0c94c66b10e93048a7093d48f121ff3238c0f71e4726cae6e37bfb40953f0"} Jan 29 15:45:01 crc kubenswrapper[4835]: I0129 15:45:01.045256 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" event={"ID":"905028c3-8935-4143-9c14-3550f391ca82","Type":"ContainerStarted","Data":"156abc9720b3bbe45ed73e5091509bdc82160ab8c3c856a437bac82a44ad36f9"} Jan 29 15:45:01 crc kubenswrapper[4835]: E0129 15:45:01.494017 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:45:01 crc kubenswrapper[4835]: E0129 15:45:01.494076 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" 
podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.227425 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.319269 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume\") pod \"905028c3-8935-4143-9c14-3550f391ca82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.319342 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume\") pod \"905028c3-8935-4143-9c14-3550f391ca82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.319399 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49jl\" (UniqueName: \"kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl\") pod \"905028c3-8935-4143-9c14-3550f391ca82\" (UID: \"905028c3-8935-4143-9c14-3550f391ca82\") " Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.320041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume" (OuterVolumeSpecName: "config-volume") pod "905028c3-8935-4143-9c14-3550f391ca82" (UID: "905028c3-8935-4143-9c14-3550f391ca82"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.324185 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl" (OuterVolumeSpecName: "kube-api-access-p49jl") pod "905028c3-8935-4143-9c14-3550f391ca82" (UID: "905028c3-8935-4143-9c14-3550f391ca82"). InnerVolumeSpecName "kube-api-access-p49jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.324338 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "905028c3-8935-4143-9c14-3550f391ca82" (UID: "905028c3-8935-4143-9c14-3550f391ca82"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.420521 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/905028c3-8935-4143-9c14-3550f391ca82-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.420567 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49jl\" (UniqueName: \"kubernetes.io/projected/905028c3-8935-4143-9c14-3550f391ca82-kube-api-access-p49jl\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:02 crc kubenswrapper[4835]: I0129 15:45:02.420583 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/905028c3-8935-4143-9c14-3550f391ca82-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 15:45:02 crc kubenswrapper[4835]: E0129 15:45:02.626288 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:45:02 crc kubenswrapper[4835]: E0129 15:45:02.626440 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmcv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bm6bl_openshift-marketplace(cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0): ErrImagePull: 
initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:02 crc kubenswrapper[4835]: E0129 15:45:02.627668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:45:03 crc kubenswrapper[4835]: I0129 15:45:03.060073 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" event={"ID":"905028c3-8935-4143-9c14-3550f391ca82","Type":"ContainerDied","Data":"156abc9720b3bbe45ed73e5091509bdc82160ab8c3c856a437bac82a44ad36f9"} Jan 29 15:45:03 crc kubenswrapper[4835]: I0129 15:45:03.060116 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156abc9720b3bbe45ed73e5091509bdc82160ab8c3c856a437bac82a44ad36f9" Jan 29 15:45:03 crc kubenswrapper[4835]: I0129 15:45:03.060190 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82" Jan 29 15:45:05 crc kubenswrapper[4835]: E0129 15:45:05.494211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:45:05 crc kubenswrapper[4835]: E0129 15:45:05.495174 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:45:13 crc kubenswrapper[4835]: E0129 15:45:13.495171 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:45:14 crc kubenswrapper[4835]: E0129 15:45:14.493718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:45:16 crc kubenswrapper[4835]: E0129 15:45:16.641212 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:45:16 crc kubenswrapper[4835]: E0129 15:45:16.641346 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x6zmb_openshift-marketplace(b35c1531-391d-4ce7-8e47-7dfd3c9cefd4): ErrImagePull: 
initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:16 crc kubenswrapper[4835]: E0129 15:45:16.642665 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:45:18 crc kubenswrapper[4835]: E0129 15:45:18.626757 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:45:18 crc kubenswrapper[4835]: E0129 15:45:18.627176 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4225d_openshift-marketplace(528287ca-7a32-4daf-ac69-32ad3ea4afd1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:18 crc kubenswrapper[4835]: E0129 15:45:18.628556 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:45:20 crc kubenswrapper[4835]: E0129 15:45:20.494349 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:45:24 crc kubenswrapper[4835]: E0129 15:45:24.627581 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:45:24 crc kubenswrapper[4835]: E0129 15:45:24.628003 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmcv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bm6bl_openshift-marketplace(cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:45:24 crc kubenswrapper[4835]: E0129 15:45:24.629165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:45:28 crc kubenswrapper[4835]: E0129 15:45:28.498102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:45:31 crc kubenswrapper[4835]: E0129 15:45:31.494585 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:45:32 crc kubenswrapper[4835]: E0129 15:45:32.493587 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:45:35 crc kubenswrapper[4835]: E0129 15:45:35.492997 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:45:36 crc kubenswrapper[4835]: E0129 15:45:36.493450 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:45:42 crc kubenswrapper[4835]: E0129 15:45:42.494616 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:45:43 crc kubenswrapper[4835]: E0129 15:45:43.493495 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:45:43 crc kubenswrapper[4835]: E0129 15:45:43.493530 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:45:49 crc kubenswrapper[4835]: E0129 15:45:49.493998 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:45:51 crc kubenswrapper[4835]: E0129 15:45:51.494215 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:45:53 crc kubenswrapper[4835]: E0129 15:45:53.494247 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:45:55 crc kubenswrapper[4835]: E0129 15:45:55.494501 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:45:58 crc kubenswrapper[4835]: E0129 15:45:58.501857 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:46:03 crc kubenswrapper[4835]: E0129 15:46:03.493365 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:46:04 crc kubenswrapper[4835]: E0129 15:46:04.494219 4835 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:46:05 crc kubenswrapper[4835]: E0129 15:46:05.618782 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 15:46:05 crc kubenswrapper[4835]: E0129 15:46:05.618936 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmcv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bm6bl_openshift-marketplace(cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:46:05 crc kubenswrapper[4835]: E0129 15:46:05.620151 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:46:07 crc kubenswrapper[4835]: E0129 15:46:07.494150 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:46:10 crc kubenswrapper[4835]: E0129 15:46:10.619248 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 15:46:10 crc kubenswrapper[4835]: E0129 15:46:10.619685 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97l88,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4225d_openshift-marketplace(528287ca-7a32-4daf-ac69-32ad3ea4afd1): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:46:10 crc kubenswrapper[4835]: E0129 15:46:10.620897 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:46:14 crc kubenswrapper[4835]: E0129 15:46:14.494365 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:46:16 crc kubenswrapper[4835]: E0129 15:46:16.493623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:46:20 crc kubenswrapper[4835]: E0129 15:46:20.494347 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:46:22 crc kubenswrapper[4835]: E0129 15:46:22.711069 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:46:25 crc kubenswrapper[4835]: E0129 15:46:25.493762 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:46:27 crc kubenswrapper[4835]: E0129 15:46:27.494179 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:46:29 crc kubenswrapper[4835]: E0129 15:46:29.494022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:46:32 crc kubenswrapper[4835]: E0129 15:46:32.496382 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:46:33 crc kubenswrapper[4835]: E0129 15:46:33.493951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:46:39 crc kubenswrapper[4835]: E0129 15:46:39.494844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:46:40 crc kubenswrapper[4835]: E0129 15:46:40.493835 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:46:41 crc kubenswrapper[4835]: I0129 15:46:41.493031 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:46:41 crc kubenswrapper[4835]: E0129 15:46:41.625349 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:46:41 crc kubenswrapper[4835]: E0129 15:46:41.625527 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-x6zmb_openshift-marketplace(b35c1531-391d-4ce7-8e47-7dfd3c9cefd4): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:46:41 crc kubenswrapper[4835]: E0129 15:46:41.626788 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:46:43 crc kubenswrapper[4835]: E0129 15:46:43.494177 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:46:47 crc kubenswrapper[4835]: E0129 15:46:47.493783 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:46:51 crc kubenswrapper[4835]: I0129 15:46:51.614147 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:46:51 crc kubenswrapper[4835]: I0129 15:46:51.614512 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:46:52 crc kubenswrapper[4835]: E0129 15:46:52.493814 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:46:54 crc kubenswrapper[4835]: E0129 15:46:54.495435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:46:55 crc kubenswrapper[4835]: E0129 15:46:55.495294 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:46:57 crc kubenswrapper[4835]: E0129 15:46:57.494230 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:46:58 crc kubenswrapper[4835]: E0129 15:46:58.497033 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:47:04 crc kubenswrapper[4835]: E0129 15:47:04.493859 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:47:07 crc kubenswrapper[4835]: E0129 15:47:07.494240 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:47:09 crc kubenswrapper[4835]: E0129 15:47:09.494577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:47:10 crc kubenswrapper[4835]: E0129 15:47:10.493400 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:47:11 crc kubenswrapper[4835]: E0129 15:47:11.492700 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:47:18 crc kubenswrapper[4835]: E0129 15:47:18.498763 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:47:19 crc kubenswrapper[4835]: E0129 15:47:19.495363 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:3a39f1fb02652308485b60f7dde40c291fc8a25e5e197bd89958a683c7800f14\\\"\"" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" Jan 29 15:47:20 crc kubenswrapper[4835]: E0129 15:47:20.494759 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:47:21 crc kubenswrapper[4835]: E0129 15:47:21.512764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" Jan 29 15:47:21 crc kubenswrapper[4835]: I0129 15:47:21.615069 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:21 crc kubenswrapper[4835]: I0129 15:47:21.615133 4835 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:47:24 crc kubenswrapper[4835]: E0129 15:47:24.493557 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" Jan 29 15:47:29 crc kubenswrapper[4835]: E0129 15:47:29.494362 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" Jan 29 15:47:35 crc kubenswrapper[4835]: I0129 15:47:35.313029 4835 generic.go:334] "Generic (PLEG): container finished" podID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerID="3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3" exitCode=0 Jan 29 15:47:35 crc kubenswrapper[4835]: I0129 15:47:35.313119 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerDied","Data":"3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3"} Jan 29 15:47:35 crc kubenswrapper[4835]: E0129 15:47:35.492687 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" 
podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:47:36 crc kubenswrapper[4835]: I0129 15:47:36.321223 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerStarted","Data":"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1"} Jan 29 15:47:36 crc kubenswrapper[4835]: I0129 15:47:36.339101 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bm6bl" podStartSLOduration=2.568215365 podStartE2EDuration="2m49.339080223s" podCreationTimestamp="2026-01-29 15:44:47 +0000 UTC" firstStartedPulling="2026-01-29 15:44:48.942530563 +0000 UTC m=+991.135574237" lastFinishedPulling="2026-01-29 15:47:35.713395421 +0000 UTC m=+1157.906439095" observedRunningTime="2026-01-29 15:47:36.336347163 +0000 UTC m=+1158.529390817" watchObservedRunningTime="2026-01-29 15:47:36.339080223 +0000 UTC m=+1158.532123877" Jan 29 15:47:37 crc kubenswrapper[4835]: I0129 15:47:37.965245 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:37 crc kubenswrapper[4835]: I0129 15:47:37.965351 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:38 crc kubenswrapper[4835]: I0129 15:47:38.002725 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:38 crc kubenswrapper[4835]: I0129 15:47:38.332723 4835 generic.go:334] "Generic (PLEG): container finished" podID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerID="02a110a46e2a1ef84504f07f9bf10a9893d236bce62e96e11807bc824103af4d" exitCode=0 Jan 29 15:47:38 crc kubenswrapper[4835]: I0129 15:47:38.332816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" event={"ID":"d892ba7a-74a8-4d73-942f-0eb27888b77f","Type":"ContainerDied","Data":"02a110a46e2a1ef84504f07f9bf10a9893d236bce62e96e11807bc824103af4d"} Jan 29 15:47:39 crc kubenswrapper[4835]: I0129 15:47:39.340716 4835 generic.go:334] "Generic (PLEG): container finished" podID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerID="85b6063f83e4649c1be72df3f1046751e4af3a39e9df3f00f938dbcae50fd488" exitCode=0 Jan 29 15:47:39 crc kubenswrapper[4835]: I0129 15:47:39.340793 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" event={"ID":"d892ba7a-74a8-4d73-942f-0eb27888b77f","Type":"ContainerDied","Data":"85b6063f83e4649c1be72df3f1046751e4af3a39e9df3f00f938dbcae50fd488"} Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.347929 4835 generic.go:334] "Generic (PLEG): container finished" podID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerID="180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e" exitCode=0 Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.347988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerDied","Data":"180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e"} Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.580150 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.660868 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util\") pod \"d892ba7a-74a8-4d73-942f-0eb27888b77f\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.660938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle\") pod \"d892ba7a-74a8-4d73-942f-0eb27888b77f\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.661112 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx9th\" (UniqueName: \"kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th\") pod \"d892ba7a-74a8-4d73-942f-0eb27888b77f\" (UID: \"d892ba7a-74a8-4d73-942f-0eb27888b77f\") " Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.663616 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle" (OuterVolumeSpecName: "bundle") pod "d892ba7a-74a8-4d73-942f-0eb27888b77f" (UID: "d892ba7a-74a8-4d73-942f-0eb27888b77f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.669502 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th" (OuterVolumeSpecName: "kube-api-access-qx9th") pod "d892ba7a-74a8-4d73-942f-0eb27888b77f" (UID: "d892ba7a-74a8-4d73-942f-0eb27888b77f"). InnerVolumeSpecName "kube-api-access-qx9th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.672800 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util" (OuterVolumeSpecName: "util") pod "d892ba7a-74a8-4d73-942f-0eb27888b77f" (UID: "d892ba7a-74a8-4d73-942f-0eb27888b77f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.762922 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx9th\" (UniqueName: \"kubernetes.io/projected/d892ba7a-74a8-4d73-942f-0eb27888b77f-kube-api-access-qx9th\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.762967 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:40 crc kubenswrapper[4835]: I0129 15:47:40.762982 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d892ba7a-74a8-4d73-942f-0eb27888b77f-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:41 crc kubenswrapper[4835]: I0129 15:47:41.357493 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerStarted","Data":"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee"} Jan 29 15:47:41 crc kubenswrapper[4835]: I0129 15:47:41.359515 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" event={"ID":"d892ba7a-74a8-4d73-942f-0eb27888b77f","Type":"ContainerDied","Data":"6b5ecefc365f4a4cc7a21a5d8d31b6e566ef5b254536f045920cbcc83187d098"} Jan 29 15:47:41 crc kubenswrapper[4835]: I0129 15:47:41.359558 4835 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5ecefc365f4a4cc7a21a5d8d31b6e566ef5b254536f045920cbcc83187d098" Jan 29 15:47:41 crc kubenswrapper[4835]: I0129 15:47:41.359627 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm" Jan 29 15:47:41 crc kubenswrapper[4835]: I0129 15:47:41.380884 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8g2rj" podStartSLOduration=2.4546163979999998 podStartE2EDuration="5m58.380854404s" podCreationTimestamp="2026-01-29 15:41:43 +0000 UTC" firstStartedPulling="2026-01-29 15:41:44.901049398 +0000 UTC m=+807.094093052" lastFinishedPulling="2026-01-29 15:47:40.827287394 +0000 UTC m=+1163.020331058" observedRunningTime="2026-01-29 15:47:41.37635955 +0000 UTC m=+1163.569403214" watchObservedRunningTime="2026-01-29 15:47:41.380854404 +0000 UTC m=+1163.573898118" Jan 29 15:47:43 crc kubenswrapper[4835]: I0129 15:47:43.372570 4835 generic.go:334] "Generic (PLEG): container finished" podID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerID="cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0" exitCode=0 Jan 29 15:47:43 crc kubenswrapper[4835]: I0129 15:47:43.373100 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerDied","Data":"cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0"} Jan 29 15:47:43 crc kubenswrapper[4835]: I0129 15:47:43.673896 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:43 crc kubenswrapper[4835]: I0129 15:47:43.673955 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:44 crc 
kubenswrapper[4835]: I0129 15:47:44.380056 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerStarted","Data":"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1"} Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.382548 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8zbjq"] Jan 29 15:47:44 crc kubenswrapper[4835]: E0129 15:47:44.382823 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905028c3-8935-4143-9c14-3550f391ca82" containerName="collect-profiles" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.382841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="905028c3-8935-4143-9c14-3550f391ca82" containerName="collect-profiles" Jan 29 15:47:44 crc kubenswrapper[4835]: E0129 15:47:44.382862 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="util" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.382871 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="util" Jan 29 15:47:44 crc kubenswrapper[4835]: E0129 15:47:44.382886 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="pull" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.382894 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="pull" Jan 29 15:47:44 crc kubenswrapper[4835]: E0129 15:47:44.382905 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="extract" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.382913 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" 
containerName="extract" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.383034 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d892ba7a-74a8-4d73-942f-0eb27888b77f" containerName="extract" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.383045 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="905028c3-8935-4143-9c14-3550f391ca82" containerName="collect-profiles" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.383489 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.385993 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.389424 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.395096 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zp65m" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.407995 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8zbjq"] Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.435267 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4225d" podStartSLOduration=2.508369555 podStartE2EDuration="3m8.435249303s" podCreationTimestamp="2026-01-29 15:44:36 +0000 UTC" firstStartedPulling="2026-01-29 15:44:37.87745477 +0000 UTC m=+980.070498464" lastFinishedPulling="2026-01-29 15:47:43.804334558 +0000 UTC m=+1165.997378212" observedRunningTime="2026-01-29 15:47:44.419546764 +0000 UTC m=+1166.612590458" watchObservedRunningTime="2026-01-29 15:47:44.435249303 +0000 UTC m=+1166.628292947" Jan 29 15:47:44 crc 
kubenswrapper[4835]: I0129 15:47:44.513171 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp88p\" (UniqueName: \"kubernetes.io/projected/3f78989b-0a38-49f8-b766-42dcb7742227-kube-api-access-hp88p\") pod \"nmstate-operator-646758c888-8zbjq\" (UID: \"3f78989b-0a38-49f8-b766-42dcb7742227\") " pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.613984 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp88p\" (UniqueName: \"kubernetes.io/projected/3f78989b-0a38-49f8-b766-42dcb7742227-kube-api-access-hp88p\") pod \"nmstate-operator-646758c888-8zbjq\" (UID: \"3f78989b-0a38-49f8-b766-42dcb7742227\") " pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.638518 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp88p\" (UniqueName: \"kubernetes.io/projected/3f78989b-0a38-49f8-b766-42dcb7742227-kube-api-access-hp88p\") pod \"nmstate-operator-646758c888-8zbjq\" (UID: \"3f78989b-0a38-49f8-b766-42dcb7742227\") " pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.702988 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.710734 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="registry-server" probeResult="failure" output=< Jan 29 15:47:44 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:47:44 crc kubenswrapper[4835]: > Jan 29 15:47:44 crc kubenswrapper[4835]: I0129 15:47:44.950757 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8zbjq"] Jan 29 15:47:44 crc kubenswrapper[4835]: W0129 15:47:44.954634 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f78989b_0a38_49f8_b766_42dcb7742227.slice/crio-47896e104ce85756a0753a8286dc277e16097c8ae30b17e029fa62d860ad1008 WatchSource:0}: Error finding container 47896e104ce85756a0753a8286dc277e16097c8ae30b17e029fa62d860ad1008: Status 404 returned error can't find the container with id 47896e104ce85756a0753a8286dc277e16097c8ae30b17e029fa62d860ad1008 Jan 29 15:47:45 crc kubenswrapper[4835]: I0129 15:47:45.385510 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" event={"ID":"3f78989b-0a38-49f8-b766-42dcb7742227","Type":"ContainerStarted","Data":"47896e104ce85756a0753a8286dc277e16097c8ae30b17e029fa62d860ad1008"} Jan 29 15:47:47 crc kubenswrapper[4835]: I0129 15:47:47.221058 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:47:47 crc kubenswrapper[4835]: I0129 15:47:47.221394 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:47:47 crc kubenswrapper[4835]: I0129 15:47:47.257120 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:47:47 crc kubenswrapper[4835]: E0129 15:47:47.494577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:47:48 crc kubenswrapper[4835]: I0129 15:47:48.008852 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:51 crc kubenswrapper[4835]: I0129 15:47:51.614881 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:47:51 crc kubenswrapper[4835]: I0129 15:47:51.615327 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:47:51 crc kubenswrapper[4835]: I0129 15:47:51.615389 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:47:51 crc kubenswrapper[4835]: I0129 15:47:51.616297 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b"} 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:47:51 crc kubenswrapper[4835]: I0129 15:47:51.616395 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b" gracePeriod=600 Jan 29 15:47:52 crc kubenswrapper[4835]: E0129 15:47:52.417005 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82353ab9_3930_4794_8182_a1e0948778df.slice/crio-conmon-e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:47:52 crc kubenswrapper[4835]: I0129 15:47:52.441522 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b" exitCode=0 Jan 29 15:47:52 crc kubenswrapper[4835]: I0129 15:47:52.441575 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b"} Jan 29 15:47:52 crc kubenswrapper[4835]: I0129 15:47:52.441618 4835 scope.go:117] "RemoveContainer" containerID="8062a46aa62900497fae594b4ff5d72f6325126057d30a9ae2096aef24638600" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.010789 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.011317 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bm6bl" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="registry-server" containerID="cri-o://7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1" gracePeriod=2 Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.433605 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.455363 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3"} Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.457383 4835 generic.go:334] "Generic (PLEG): container finished" podID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerID="7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1" exitCode=0 Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.457412 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerDied","Data":"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1"} Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.457427 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bm6bl" event={"ID":"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0","Type":"ContainerDied","Data":"e335c3ae56ba3d8547c2ec63ace9b986d3d098e25e4ed43037b3249a94aec7b7"} Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.457443 4835 scope.go:117] "RemoveContainer" containerID="7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.457522 4835 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bm6bl" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.530117 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities\") pod \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.530216 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content\") pod \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.530303 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmcv6\" (UniqueName: \"kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6\") pod \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\" (UID: \"cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0\") " Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.532587 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities" (OuterVolumeSpecName: "utilities") pod "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" (UID: "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.536607 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6" (OuterVolumeSpecName: "kube-api-access-bmcv6") pod "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" (UID: "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0"). 
InnerVolumeSpecName "kube-api-access-bmcv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.583046 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" (UID: "cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.632732 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.632787 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.632809 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmcv6\" (UniqueName: \"kubernetes.io/projected/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0-kube-api-access-bmcv6\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.634253 4835 scope.go:117] "RemoveContainer" containerID="3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.661447 4835 scope.go:117] "RemoveContainer" containerID="e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.687895 4835 scope.go:117] "RemoveContainer" containerID="7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1" Jan 29 15:47:53 crc kubenswrapper[4835]: E0129 15:47:53.688523 4835 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1\": container with ID starting with 7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1 not found: ID does not exist" containerID="7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.688579 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1"} err="failed to get container status \"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1\": rpc error: code = NotFound desc = could not find container \"7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1\": container with ID starting with 7319836f4d3e2ae9c4cc6d2083e374ad8d284ad493f3dce7804c4a4e75e0e0e1 not found: ID does not exist" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.688614 4835 scope.go:117] "RemoveContainer" containerID="3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3" Jan 29 15:47:53 crc kubenswrapper[4835]: E0129 15:47:53.688924 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3\": container with ID starting with 3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3 not found: ID does not exist" containerID="3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.688959 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3"} err="failed to get container status \"3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3\": rpc error: code = NotFound desc = could not find container 
\"3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3\": container with ID starting with 3b102023e9c35f7d51db8d304f0e0c00cdfcad5ac5b84195c45ce3a3a75535e3 not found: ID does not exist" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.688978 4835 scope.go:117] "RemoveContainer" containerID="e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10" Jan 29 15:47:53 crc kubenswrapper[4835]: E0129 15:47:53.689198 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10\": container with ID starting with e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10 not found: ID does not exist" containerID="e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.689223 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10"} err="failed to get container status \"e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10\": rpc error: code = NotFound desc = could not find container \"e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10\": container with ID starting with e805045cabab8df996e965135cb8b4986d20ca994a1e5c3e4022a42bcfa40d10 not found: ID does not exist" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.720353 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.767814 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:53 crc kubenswrapper[4835]: I0129 15:47:53.802941 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:47:53 
crc kubenswrapper[4835]: I0129 15:47:53.807099 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bm6bl"] Jan 29 15:47:54 crc kubenswrapper[4835]: I0129 15:47:54.468076 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" event={"ID":"3f78989b-0a38-49f8-b766-42dcb7742227","Type":"ContainerStarted","Data":"ac18d396df3fdee6cf36766cdb3b3ee1e3e678b6af0b6b0b0614efc92e90a738"} Jan 29 15:47:54 crc kubenswrapper[4835]: I0129 15:47:54.499119 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-8zbjq" podStartSLOduration=1.811705057 podStartE2EDuration="10.499097194s" podCreationTimestamp="2026-01-29 15:47:44 +0000 UTC" firstStartedPulling="2026-01-29 15:47:44.956239886 +0000 UTC m=+1167.149283540" lastFinishedPulling="2026-01-29 15:47:53.643631993 +0000 UTC m=+1175.836675677" observedRunningTime="2026-01-29 15:47:54.492938265 +0000 UTC m=+1176.685981999" watchObservedRunningTime="2026-01-29 15:47:54.499097194 +0000 UTC m=+1176.692140858" Jan 29 15:47:54 crc kubenswrapper[4835]: I0129 15:47:54.509427 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" path="/var/lib/kubelet/pods/cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0/volumes" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.384828 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vtqfs"] Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.385041 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="extract-utilities" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.385054 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="extract-utilities" Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 
15:47:55.385069 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="registry-server" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.385075 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="registry-server" Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.385088 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="extract-content" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.385098 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="extract-content" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.385203 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd58f9cd-aab1-4e6c-96d1-e71cc49f8cf0" containerName="registry-server" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.385817 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.388733 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5g7zr" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.403950 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vtqfs"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.436809 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.437580 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.440977 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-l8zn7"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.441376 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.441698 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.454031 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4vh\" (UniqueName: \"kubernetes.io/projected/04026234-d256-48f0-a82f-bbc23b7439d4-kube-api-access-kq4vh\") pod \"nmstate-metrics-54757c584b-vtqfs\" (UID: \"04026234-d256-48f0-a82f-bbc23b7439d4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.521237 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555072 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-dbus-socket\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555168 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47czr\" (UniqueName: \"kubernetes.io/projected/38b5bcee-28f4-4710-b03b-afc61a857751-kube-api-access-47czr\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " 
pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555216 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq4f\" (UniqueName: \"kubernetes.io/projected/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-kube-api-access-cfq4f\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555268 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555300 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4vh\" (UniqueName: \"kubernetes.io/projected/04026234-d256-48f0-a82f-bbc23b7439d4-kube-api-access-kq4vh\") pod \"nmstate-metrics-54757c584b-vtqfs\" (UID: \"04026234-d256-48f0-a82f-bbc23b7439d4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555320 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-nmstate-lock\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.555351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-ovs-socket\") pod 
\"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.578052 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4vh\" (UniqueName: \"kubernetes.io/projected/04026234-d256-48f0-a82f-bbc23b7439d4-kube-api-access-kq4vh\") pod \"nmstate-metrics-54757c584b-vtqfs\" (UID: \"04026234-d256-48f0-a82f-bbc23b7439d4\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.619006 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.619821 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.621627 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-7mftt" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.621953 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.622241 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.628361 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47czr\" (UniqueName: \"kubernetes.io/projected/38b5bcee-28f4-4710-b03b-afc61a857751-kube-api-access-47czr\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " 
pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656456 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfq4f\" (UniqueName: \"kubernetes.io/projected/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-kube-api-access-cfq4f\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656505 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656536 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656559 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656584 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-nmstate-lock\") pod 
\"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-ovs-socket\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5kgg\" (UniqueName: \"kubernetes.io/projected/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-kube-api-access-v5kgg\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656655 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-dbus-socket\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.656933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-dbus-socket\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.657107 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-ovs-socket\") pod \"nmstate-handler-l8zn7\" (UID: 
\"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.657218 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/38b5bcee-28f4-4710-b03b-afc61a857751-nmstate-lock\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.657234 4835 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.657490 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair podName:efdfb4e0-5ef9-4bba-80eb-5ce49b592676 nodeName:}" failed. No retries permitted until 2026-01-29 15:47:56.157446721 +0000 UTC m=+1178.350490395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-ltd82" (UID: "efdfb4e0-5ef9-4bba-80eb-5ce49b592676") : secret "openshift-nmstate-webhook" not found Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.675963 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47czr\" (UniqueName: \"kubernetes.io/projected/38b5bcee-28f4-4710-b03b-afc61a857751-kube-api-access-47czr\") pod \"nmstate-handler-l8zn7\" (UID: \"38b5bcee-28f4-4710-b03b-afc61a857751\") " pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.678084 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfq4f\" (UniqueName: \"kubernetes.io/projected/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-kube-api-access-cfq4f\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.699577 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.757870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.757954 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.757990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5kgg\" (UniqueName: \"kubernetes.io/projected/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-kube-api-access-v5kgg\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.758403 4835 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 15:47:55 crc kubenswrapper[4835]: E0129 15:47:55.758461 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert podName:92a62ad2-e26b-4cfa-b7aa-3d2818800cca nodeName:}" failed. No retries permitted until 2026-01-29 15:47:56.258444064 +0000 UTC m=+1178.451487718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-2lh69" (UID: "92a62ad2-e26b-4cfa-b7aa-3d2818800cca") : secret "plugin-serving-cert" not found Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.759636 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.762806 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.778421 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5kgg\" (UniqueName: \"kubernetes.io/projected/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-kube-api-access-v5kgg\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.874503 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-57b8dd994f-8bws7"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.875850 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.910421 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57b8dd994f-8bws7"] Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961094 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-oauth-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961186 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961202 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-service-ca\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961219 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-oauth-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961250 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-trusted-ca-bundle\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:55 crc kubenswrapper[4835]: I0129 15:47:55.961273 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75bd\" (UniqueName: \"kubernetes.io/projected/0a6a603e-7b3f-465c-bd1e-79e06306ff46-kube-api-access-z75bd\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062017 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-oauth-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062058 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: 
I0129 15:47:56.062075 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-service-ca\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062111 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-oauth-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-trusted-ca-bundle\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062184 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z75bd\" (UniqueName: \"kubernetes.io/projected/0a6a603e-7b3f-465c-bd1e-79e06306ff46-kube-api-access-z75bd\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.062238 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.063173 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.063647 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-service-ca\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.064169 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-oauth-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.064432 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a6a603e-7b3f-465c-bd1e-79e06306ff46-trusted-ca-bundle\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.066027 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-serving-cert\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.066480 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a6a603e-7b3f-465c-bd1e-79e06306ff46-console-oauth-config\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.084372 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z75bd\" (UniqueName: \"kubernetes.io/projected/0a6a603e-7b3f-465c-bd1e-79e06306ff46-kube-api-access-z75bd\") pod \"console-57b8dd994f-8bws7\" (UID: \"0a6a603e-7b3f-465c-bd1e-79e06306ff46\") " pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.152396 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vtqfs"] Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.163561 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.166328 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efdfb4e0-5ef9-4bba-80eb-5ce49b592676-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ltd82\" (UID: \"efdfb4e0-5ef9-4bba-80eb-5ce49b592676\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.209144 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.265091 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.268966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/92a62ad2-e26b-4cfa-b7aa-3d2818800cca-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-2lh69\" (UID: \"92a62ad2-e26b-4cfa-b7aa-3d2818800cca\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.352493 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.403922 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-57b8dd994f-8bws7"] Jan 29 15:47:56 crc kubenswrapper[4835]: W0129 15:47:56.408749 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6a603e_7b3f_465c_bd1e_79e06306ff46.slice/crio-323ddb7ba8e611c544b066d55996ce52c948125b641128409fd4f1220a5f9ba7 WatchSource:0}: Error finding container 323ddb7ba8e611c544b066d55996ce52c948125b641128409fd4f1220a5f9ba7: Status 404 returned error can't find the container with id 323ddb7ba8e611c544b066d55996ce52c948125b641128409fd4f1220a5f9ba7 Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.481623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l8zn7" event={"ID":"38b5bcee-28f4-4710-b03b-afc61a857751","Type":"ContainerStarted","Data":"8ae1d821a5becc2c37e2e1eac03afc1b67dd52949ed14afe72d573c1e1af772f"} Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.483885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57b8dd994f-8bws7" event={"ID":"0a6a603e-7b3f-465c-bd1e-79e06306ff46","Type":"ContainerStarted","Data":"323ddb7ba8e611c544b066d55996ce52c948125b641128409fd4f1220a5f9ba7"} Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.484735 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" event={"ID":"04026234-d256-48f0-a82f-bbc23b7439d4","Type":"ContainerStarted","Data":"d1159abc9b083dfef821a973dd34c710b3a60c752ee5381958762c315f9e0e6d"} Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.529714 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82"] Jan 29 15:47:56 crc kubenswrapper[4835]: W0129 15:47:56.535671 4835 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefdfb4e0_5ef9_4bba_80eb_5ce49b592676.slice/crio-a52da061ded0eae19898b8a4df46453e21590936eb03757b21c73abc19fd7232 WatchSource:0}: Error finding container a52da061ded0eae19898b8a4df46453e21590936eb03757b21c73abc19fd7232: Status 404 returned error can't find the container with id a52da061ded0eae19898b8a4df46453e21590936eb03757b21c73abc19fd7232 Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.541751 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" Jan 29 15:47:56 crc kubenswrapper[4835]: I0129 15:47:56.718747 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69"] Jan 29 15:47:57 crc kubenswrapper[4835]: I0129 15:47:57.268278 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:47:57 crc kubenswrapper[4835]: I0129 15:47:57.490847 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" event={"ID":"92a62ad2-e26b-4cfa-b7aa-3d2818800cca","Type":"ContainerStarted","Data":"2678376e16c49c38a55340c39c50d2269b1190a23662211ab906388d46f2c574"} Jan 29 15:47:57 crc kubenswrapper[4835]: I0129 15:47:57.492221 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-57b8dd994f-8bws7" event={"ID":"0a6a603e-7b3f-465c-bd1e-79e06306ff46","Type":"ContainerStarted","Data":"f3df2f2138d3efa4cbf1a5664cccfda63bf2ca01a7c233dc2355524b46d18243"} Jan 29 15:47:57 crc kubenswrapper[4835]: I0129 15:47:57.493091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" 
event={"ID":"efdfb4e0-5ef9-4bba-80eb-5ce49b592676","Type":"ContainerStarted","Data":"a52da061ded0eae19898b8a4df46453e21590936eb03757b21c73abc19fd7232"} Jan 29 15:47:57 crc kubenswrapper[4835]: I0129 15:47:57.513178 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-57b8dd994f-8bws7" podStartSLOduration=2.513144354 podStartE2EDuration="2.513144354s" podCreationTimestamp="2026-01-29 15:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:47:57.509198783 +0000 UTC m=+1179.702242437" watchObservedRunningTime="2026-01-29 15:47:57.513144354 +0000 UTC m=+1179.706188028" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.010325 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.010962 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8g2rj" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="registry-server" containerID="cri-o://3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee" gracePeriod=2 Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.471185 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.494405 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities\") pod \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.494536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvpzp\" (UniqueName: \"kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp\") pod \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.494655 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content\") pod \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\" (UID: \"55a2b862-57f9-43ad-bdb8-f523d3c49f17\") " Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.495344 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities" (OuterVolumeSpecName: "utilities") pod "55a2b862-57f9-43ad-bdb8-f523d3c49f17" (UID: "55a2b862-57f9-43ad-bdb8-f523d3c49f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.500611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp" (OuterVolumeSpecName: "kube-api-access-xvpzp") pod "55a2b862-57f9-43ad-bdb8-f523d3c49f17" (UID: "55a2b862-57f9-43ad-bdb8-f523d3c49f17"). InnerVolumeSpecName "kube-api-access-xvpzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.507766 4835 generic.go:334] "Generic (PLEG): container finished" podID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerID="3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee" exitCode=0 Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.507951 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8g2rj" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.520007 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerDied","Data":"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee"} Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.520049 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8g2rj" event={"ID":"55a2b862-57f9-43ad-bdb8-f523d3c49f17","Type":"ContainerDied","Data":"b062092ffdf5acb7918afd808e185456579d98591c1e4999b465268d7070287f"} Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.520071 4835 scope.go:117] "RemoveContainer" containerID="3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.551479 4835 scope.go:117] "RemoveContainer" containerID="180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.566722 4835 scope.go:117] "RemoveContainer" containerID="ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.589704 4835 scope.go:117] "RemoveContainer" containerID="3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee" Jan 29 15:47:58 crc kubenswrapper[4835]: E0129 15:47:58.590272 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee\": container with ID starting with 3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee not found: ID does not exist" containerID="3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.590307 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee"} err="failed to get container status \"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee\": rpc error: code = NotFound desc = could not find container \"3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee\": container with ID starting with 3b6565d5a82b98f9bf0da2d4d4c3749b10cd8014fac0443faf7856f94c7aedee not found: ID does not exist" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.590333 4835 scope.go:117] "RemoveContainer" containerID="180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e" Jan 29 15:47:58 crc kubenswrapper[4835]: E0129 15:47:58.590691 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e\": container with ID starting with 180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e not found: ID does not exist" containerID="180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.590732 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e"} err="failed to get container status \"180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e\": rpc error: code = NotFound desc = could not find container 
\"180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e\": container with ID starting with 180fab0bee2017faf772b00e9ccfbea6edcd3286dac0ed7f2142cd779f0d1b1e not found: ID does not exist" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.590761 4835 scope.go:117] "RemoveContainer" containerID="ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c" Jan 29 15:47:58 crc kubenswrapper[4835]: E0129 15:47:58.591256 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c\": container with ID starting with ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c not found: ID does not exist" containerID="ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.591310 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c"} err="failed to get container status \"ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c\": rpc error: code = NotFound desc = could not find container \"ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c\": container with ID starting with ca58cd56baf5c38629976fbfd7b7008a56ca0608f345577af09f55b66766811c not found: ID does not exist" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.596162 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.596203 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvpzp\" (UniqueName: \"kubernetes.io/projected/55a2b862-57f9-43ad-bdb8-f523d3c49f17-kube-api-access-xvpzp\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:58 crc 
kubenswrapper[4835]: I0129 15:47:58.619478 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55a2b862-57f9-43ad-bdb8-f523d3c49f17" (UID: "55a2b862-57f9-43ad-bdb8-f523d3c49f17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.698120 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a2b862-57f9-43ad-bdb8-f523d3c49f17-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.848230 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:47:58 crc kubenswrapper[4835]: I0129 15:47:58.852899 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8g2rj"] Jan 29 15:47:59 crc kubenswrapper[4835]: I0129 15:47:59.515505 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" event={"ID":"04026234-d256-48f0-a82f-bbc23b7439d4","Type":"ContainerStarted","Data":"feca8f9eb86fd0120015744d46b52436fe3af994dc17b43543cb63a759ad5090"} Jan 29 15:47:59 crc kubenswrapper[4835]: I0129 15:47:59.520085 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l8zn7" event={"ID":"38b5bcee-28f4-4710-b03b-afc61a857751","Type":"ContainerStarted","Data":"716e9e222bba05b46682dee77c54cb1554170e285b7f203a8d805f24db03af03"} Jan 29 15:47:59 crc kubenswrapper[4835]: I0129 15:47:59.520218 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:47:59 crc kubenswrapper[4835]: I0129 15:47:59.537498 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-handler-l8zn7" podStartSLOduration=1.4197091880000001 podStartE2EDuration="4.537476433s" podCreationTimestamp="2026-01-29 15:47:55 +0000 UTC" firstStartedPulling="2026-01-29 15:47:55.857953019 +0000 UTC m=+1178.050996673" lastFinishedPulling="2026-01-29 15:47:58.975720264 +0000 UTC m=+1181.168763918" observedRunningTime="2026-01-29 15:47:59.533294766 +0000 UTC m=+1181.726338420" watchObservedRunningTime="2026-01-29 15:47:59.537476433 +0000 UTC m=+1181.730520087" Jan 29 15:48:00 crc kubenswrapper[4835]: E0129 15:48:00.495064 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:48:00 crc kubenswrapper[4835]: I0129 15:48:00.499130 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" path="/var/lib/kubelet/pods/55a2b862-57f9-43ad-bdb8-f523d3c49f17/volumes" Jan 29 15:48:00 crc kubenswrapper[4835]: I0129 15:48:00.527629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" event={"ID":"efdfb4e0-5ef9-4bba-80eb-5ce49b592676","Type":"ContainerStarted","Data":"ef4508662a063371ced84acd0c32af87c2eb03a83dba896a5c60d3d5c6bd73ab"} Jan 29 15:48:00 crc kubenswrapper[4835]: I0129 15:48:00.527685 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:48:00 crc kubenswrapper[4835]: I0129 15:48:00.554159 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" podStartSLOduration=2.828128413 podStartE2EDuration="5.554137419s" podCreationTimestamp="2026-01-29 15:47:55 +0000 UTC" 
firstStartedPulling="2026-01-29 15:47:56.538655685 +0000 UTC m=+1178.731699339" lastFinishedPulling="2026-01-29 15:47:59.264664691 +0000 UTC m=+1181.457708345" observedRunningTime="2026-01-29 15:48:00.546847051 +0000 UTC m=+1182.739890715" watchObservedRunningTime="2026-01-29 15:48:00.554137419 +0000 UTC m=+1182.747181073" Jan 29 15:48:01 crc kubenswrapper[4835]: I0129 15:48:01.607201 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:48:01 crc kubenswrapper[4835]: I0129 15:48:01.607522 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4225d" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="registry-server" containerID="cri-o://66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1" gracePeriod=2 Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.383605 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.537772 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" event={"ID":"92a62ad2-e26b-4cfa-b7aa-3d2818800cca","Type":"ContainerStarted","Data":"93af93bc676df94927b351cb4bb6f581a8f406dd16b624b86c5115643b95f60d"} Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.541130 4835 generic.go:334] "Generic (PLEG): container finished" podID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerID="66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1" exitCode=0 Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.541213 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerDied","Data":"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1"} Jan 29 15:48:02 crc 
kubenswrapper[4835]: I0129 15:48:02.541249 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4225d" event={"ID":"528287ca-7a32-4daf-ac69-32ad3ea4afd1","Type":"ContainerDied","Data":"63b4c7af750370787d5d7c1fa725d955fb80a7b762fc68c8bb619ab857dab07b"} Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.541265 4835 scope.go:117] "RemoveContainer" containerID="66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.541226 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4225d" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.552951 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-2lh69" podStartSLOduration=2.099246285 podStartE2EDuration="7.552935491s" podCreationTimestamp="2026-01-29 15:47:55 +0000 UTC" firstStartedPulling="2026-01-29 15:47:56.733673782 +0000 UTC m=+1178.926717436" lastFinishedPulling="2026-01-29 15:48:02.187362988 +0000 UTC m=+1184.380406642" observedRunningTime="2026-01-29 15:48:02.551348 +0000 UTC m=+1184.744391654" watchObservedRunningTime="2026-01-29 15:48:02.552935491 +0000 UTC m=+1184.745979145" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.554905 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97l88\" (UniqueName: \"kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88\") pod \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.554982 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content\") pod \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\" 
(UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.555030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities\") pod \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\" (UID: \"528287ca-7a32-4daf-ac69-32ad3ea4afd1\") " Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.557527 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities" (OuterVolumeSpecName: "utilities") pod "528287ca-7a32-4daf-ac69-32ad3ea4afd1" (UID: "528287ca-7a32-4daf-ac69-32ad3ea4afd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.560167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88" (OuterVolumeSpecName: "kube-api-access-97l88") pod "528287ca-7a32-4daf-ac69-32ad3ea4afd1" (UID: "528287ca-7a32-4daf-ac69-32ad3ea4afd1"). InnerVolumeSpecName "kube-api-access-97l88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.571024 4835 scope.go:117] "RemoveContainer" containerID="cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.590205 4835 scope.go:117] "RemoveContainer" containerID="71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.609066 4835 scope.go:117] "RemoveContainer" containerID="66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1" Jan 29 15:48:02 crc kubenswrapper[4835]: E0129 15:48:02.609879 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1\": container with ID starting with 66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1 not found: ID does not exist" containerID="66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.609909 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1"} err="failed to get container status \"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1\": rpc error: code = NotFound desc = could not find container \"66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1\": container with ID starting with 66435b1a798532d4d64848f139d5853093f545247f136555feda6380d9396ed1 not found: ID does not exist" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.609930 4835 scope.go:117] "RemoveContainer" containerID="cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0" Jan 29 15:48:02 crc kubenswrapper[4835]: E0129 15:48:02.610393 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0\": container with ID starting with cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0 not found: ID does not exist" containerID="cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.610530 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0"} err="failed to get container status \"cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0\": rpc error: code = NotFound desc = could not find container \"cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0\": container with ID starting with cd55250db28ad171f0d4aca96c455282ea7c8e75d68f913f2bab4269c3e442c0 not found: ID does not exist" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.610567 4835 scope.go:117] "RemoveContainer" containerID="71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26" Jan 29 15:48:02 crc kubenswrapper[4835]: E0129 15:48:02.610895 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26\": container with ID starting with 71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26 not found: ID does not exist" containerID="71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.610944 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26"} err="failed to get container status \"71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26\": rpc error: code = NotFound desc = could not find container \"71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26\": 
container with ID starting with 71e9372594b6499af669fb13d3135755b970d238d492ac0892fcc704157c9b26 not found: ID does not exist" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.622582 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "528287ca-7a32-4daf-ac69-32ad3ea4afd1" (UID: "528287ca-7a32-4daf-ac69-32ad3ea4afd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.657650 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97l88\" (UniqueName: \"kubernetes.io/projected/528287ca-7a32-4daf-ac69-32ad3ea4afd1-kube-api-access-97l88\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.657688 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.657699 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/528287ca-7a32-4daf-ac69-32ad3ea4afd1-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.870217 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:48:02 crc kubenswrapper[4835]: I0129 15:48:02.876297 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4225d"] Jan 29 15:48:04 crc kubenswrapper[4835]: I0129 15:48:04.501202 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" path="/var/lib/kubelet/pods/528287ca-7a32-4daf-ac69-32ad3ea4afd1/volumes" Jan 29 15:48:04 crc 
kubenswrapper[4835]: I0129 15:48:04.557222 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" event={"ID":"04026234-d256-48f0-a82f-bbc23b7439d4","Type":"ContainerStarted","Data":"10fbfde4ac7e78433cc0fa8af69a6266663d5be8b1a5b33f32f43b4903c7f4e2"} Jan 29 15:48:04 crc kubenswrapper[4835]: I0129 15:48:04.580621 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vtqfs" podStartSLOduration=2.159733305 podStartE2EDuration="9.580595336s" podCreationTimestamp="2026-01-29 15:47:55 +0000 UTC" firstStartedPulling="2026-01-29 15:47:56.159873592 +0000 UTC m=+1178.352917246" lastFinishedPulling="2026-01-29 15:48:03.580735623 +0000 UTC m=+1185.773779277" observedRunningTime="2026-01-29 15:48:04.57571459 +0000 UTC m=+1186.768758264" watchObservedRunningTime="2026-01-29 15:48:04.580595336 +0000 UTC m=+1186.773639000" Jan 29 15:48:05 crc kubenswrapper[4835]: I0129 15:48:05.794759 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-l8zn7" Jan 29 15:48:06 crc kubenswrapper[4835]: I0129 15:48:06.209891 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:48:06 crc kubenswrapper[4835]: I0129 15:48:06.209966 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:48:06 crc kubenswrapper[4835]: I0129 15:48:06.215796 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:48:06 crc kubenswrapper[4835]: I0129 15:48:06.573221 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-57b8dd994f-8bws7" Jan 29 15:48:06 crc kubenswrapper[4835]: I0129 15:48:06.632551 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:48:12 crc kubenswrapper[4835]: E0129 15:48:12.493926 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:48:16 crc kubenswrapper[4835]: I0129 15:48:16.359621 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ltd82" Jan 29 15:48:25 crc kubenswrapper[4835]: E0129 15:48:25.496104 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.523801 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl"] Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525020 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525039 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525052 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="extract-content" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525059 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" 
containerName="extract-content" Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525071 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525077 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525092 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="extract-content" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525098 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="extract-content" Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525107 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="extract-utilities" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525115 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="extract-utilities" Jan 29 15:48:28 crc kubenswrapper[4835]: E0129 15:48:28.525128 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="extract-utilities" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525135 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="extract-utilities" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525244 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a2b862-57f9-43ad-bdb8-f523d3c49f17" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.525261 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="528287ca-7a32-4daf-ac69-32ad3ea4afd1" containerName="registry-server" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.526195 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.528322 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.539346 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl"] Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.707763 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.707851 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.707889 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hgjc\" (UniqueName: \"kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.808944 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.808993 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hgjc\" (UniqueName: \"kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.809068 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.809568 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.809581 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.826377 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hgjc\" (UniqueName: \"kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:28 crc kubenswrapper[4835]: I0129 15:48:28.852078 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:29 crc kubenswrapper[4835]: I0129 15:48:29.051953 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl"] Jan 29 15:48:29 crc kubenswrapper[4835]: I0129 15:48:29.700334 4835 generic.go:334] "Generic (PLEG): container finished" podID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerID="dee168bc1a2d1e5b1de4aa669b93a8c16acea66faf181ed2f18f22bd29f25427" exitCode=0 Jan 29 15:48:29 crc kubenswrapper[4835]: I0129 15:48:29.700409 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" event={"ID":"fca125aa-5480-4cd7-b41e-e55568e06ffd","Type":"ContainerDied","Data":"dee168bc1a2d1e5b1de4aa669b93a8c16acea66faf181ed2f18f22bd29f25427"} Jan 29 15:48:29 crc kubenswrapper[4835]: I0129 15:48:29.700523 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" event={"ID":"fca125aa-5480-4cd7-b41e-e55568e06ffd","Type":"ContainerStarted","Data":"79330249e868b0b6d744dc097914ce2af3d2fbec49fb6b1b506d5337c9122ed1"} Jan 29 15:48:31 crc kubenswrapper[4835]: I0129 15:48:31.670197 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zskhr" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" containerID="cri-o://f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca" gracePeriod=15 Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.057031 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zskhr_d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4/console/0.log" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.057659 4835 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153013 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153068 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153105 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153218 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153251 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-78hl5\" (UniqueName: \"kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.153282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert\") pod \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\" (UID: \"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4\") " Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.154421 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.154501 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.154808 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config" (OuterVolumeSpecName: "console-config") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.155013 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca" (OuterVolumeSpecName: "service-ca") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.159167 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.159449 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5" (OuterVolumeSpecName: "kube-api-access-78hl5") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "kube-api-access-78hl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.159542 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" (UID: "d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254657 4835 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254700 4835 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254710 4835 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254718 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78hl5\" (UniqueName: \"kubernetes.io/projected/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-kube-api-access-78hl5\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254731 4835 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254739 4835 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.254748 4835 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:32 crc 
kubenswrapper[4835]: I0129 15:48:32.725997 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zskhr_d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4/console/0.log" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.726057 4835 generic.go:334] "Generic (PLEG): container finished" podID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerID="f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca" exitCode=2 Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.726132 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zskhr" event={"ID":"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4","Type":"ContainerDied","Data":"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca"} Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.726170 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zskhr" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.726190 4835 scope.go:117] "RemoveContainer" containerID="f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.726172 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zskhr" event={"ID":"d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4","Type":"ContainerDied","Data":"aaf4065c34313977154d88212786021bb8dda8f579c41fca8ebc82162c15a488"} Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.732611 4835 generic.go:334] "Generic (PLEG): container finished" podID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerID="a459123b586235c533daca61ee76d4590e7cb5034294e4e13d58091a50aec453" exitCode=0 Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.732680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" 
event={"ID":"fca125aa-5480-4cd7-b41e-e55568e06ffd","Type":"ContainerDied","Data":"a459123b586235c533daca61ee76d4590e7cb5034294e4e13d58091a50aec453"} Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.761533 4835 scope.go:117] "RemoveContainer" containerID="f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca" Jan 29 15:48:32 crc kubenswrapper[4835]: E0129 15:48:32.762989 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca\": container with ID starting with f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca not found: ID does not exist" containerID="f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.763064 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca"} err="failed to get container status \"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca\": rpc error: code = NotFound desc = could not find container \"f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca\": container with ID starting with f9fdbcbd90aa559301f91779427afb9cc946b0829b47952fb780b505fefb8eca not found: ID does not exist" Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.772253 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:48:32 crc kubenswrapper[4835]: I0129 15:48:32.777794 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zskhr"] Jan 29 15:48:33 crc kubenswrapper[4835]: I0129 15:48:33.739292 4835 generic.go:334] "Generic (PLEG): container finished" podID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerID="6e69345c97d5d7a629dd8a250c8d3adfc453cb6bacdd9862b855444096ce38ea" exitCode=0 Jan 29 15:48:33 crc 
kubenswrapper[4835]: I0129 15:48:33.739393 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" event={"ID":"fca125aa-5480-4cd7-b41e-e55568e06ffd","Type":"ContainerDied","Data":"6e69345c97d5d7a629dd8a250c8d3adfc453cb6bacdd9862b855444096ce38ea"} Jan 29 15:48:34 crc kubenswrapper[4835]: I0129 15:48:34.518595 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" path="/var/lib/kubelet/pods/d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4/volumes" Jan 29 15:48:34 crc kubenswrapper[4835]: I0129 15:48:34.988214 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.090261 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hgjc\" (UniqueName: \"kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc\") pod \"fca125aa-5480-4cd7-b41e-e55568e06ffd\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.090378 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle\") pod \"fca125aa-5480-4cd7-b41e-e55568e06ffd\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.090534 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util\") pod \"fca125aa-5480-4cd7-b41e-e55568e06ffd\" (UID: \"fca125aa-5480-4cd7-b41e-e55568e06ffd\") " Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.091853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle" (OuterVolumeSpecName: "bundle") pod "fca125aa-5480-4cd7-b41e-e55568e06ffd" (UID: "fca125aa-5480-4cd7-b41e-e55568e06ffd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.100126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc" (OuterVolumeSpecName: "kube-api-access-2hgjc") pod "fca125aa-5480-4cd7-b41e-e55568e06ffd" (UID: "fca125aa-5480-4cd7-b41e-e55568e06ffd"). InnerVolumeSpecName "kube-api-access-2hgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.159754 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util" (OuterVolumeSpecName: "util") pod "fca125aa-5480-4cd7-b41e-e55568e06ffd" (UID: "fca125aa-5480-4cd7-b41e-e55568e06ffd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.192530 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hgjc\" (UniqueName: \"kubernetes.io/projected/fca125aa-5480-4cd7-b41e-e55568e06ffd-kube-api-access-2hgjc\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.192589 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.192614 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fca125aa-5480-4cd7-b41e-e55568e06ffd-util\") on node \"crc\" DevicePath \"\"" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.757195 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" event={"ID":"fca125aa-5480-4cd7-b41e-e55568e06ffd","Type":"ContainerDied","Data":"79330249e868b0b6d744dc097914ce2af3d2fbec49fb6b1b506d5337c9122ed1"} Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.757262 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79330249e868b0b6d744dc097914ce2af3d2fbec49fb6b1b506d5337c9122ed1" Jan 29 15:48:35 crc kubenswrapper[4835]: I0129 15:48:35.757330 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl" Jan 29 15:48:37 crc kubenswrapper[4835]: E0129 15:48:37.494802 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.673769 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm"] Jan 29 15:48:44 crc kubenswrapper[4835]: E0129 15:48:44.674478 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="extract" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674489 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="extract" Jan 29 15:48:44 crc kubenswrapper[4835]: E0129 15:48:44.674502 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="pull" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674508 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="pull" Jan 29 15:48:44 crc kubenswrapper[4835]: E0129 15:48:44.674515 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674523 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" Jan 29 15:48:44 crc kubenswrapper[4835]: E0129 15:48:44.674536 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="util" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674546 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="util" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674675 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fca125aa-5480-4cd7-b41e-e55568e06ffd" containerName="extract" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.674694 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8b3c87f-8efa-4b9f-9b3d-7e15b3b935f4" containerName="console" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.675098 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.677260 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.677410 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.677515 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.677696 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-t8w99" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.679166 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.695518 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm"] Jan 29 15:48:44 crc kubenswrapper[4835]: 
I0129 15:48:44.811978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-webhook-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.812036 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.812240 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9m7l\" (UniqueName: \"kubernetes.io/projected/52588c5c-ffb3-4534-994f-ac105b5cf44d-kube-api-access-r9m7l\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.913071 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9m7l\" (UniqueName: \"kubernetes.io/projected/52588c5c-ffb3-4534-994f-ac105b5cf44d-kube-api-access-r9m7l\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.913154 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-webhook-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.913185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.920241 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-webhook-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.924090 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52588c5c-ffb3-4534-994f-ac105b5cf44d-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.927682 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6"] Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.928334 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.930838 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-z58x4" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.931383 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.932110 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.936074 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9m7l\" (UniqueName: \"kubernetes.io/projected/52588c5c-ffb3-4534-994f-ac105b5cf44d-kube-api-access-r9m7l\") pod \"metallb-operator-controller-manager-85cbcd5479-9qqqm\" (UID: \"52588c5c-ffb3-4534-994f-ac105b5cf44d\") " pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.986234 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6"] Jan 29 15:48:44 crc kubenswrapper[4835]: I0129 15:48:44.991093 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.115526 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjbb\" (UniqueName: \"kubernetes.io/projected/88695349-f8fe-4ca5-9d25-c7c822d23e89-kube-api-access-5pjbb\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.115901 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-webhook-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.115952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-apiservice-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.217763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-apiservice-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.217861 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5pjbb\" (UniqueName: \"kubernetes.io/projected/88695349-f8fe-4ca5-9d25-c7c822d23e89-kube-api-access-5pjbb\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.217906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-webhook-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.227459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-apiservice-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.227485 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/88695349-f8fe-4ca5-9d25-c7c822d23e89-webhook-cert\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.255213 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjbb\" (UniqueName: \"kubernetes.io/projected/88695349-f8fe-4ca5-9d25-c7c822d23e89-kube-api-access-5pjbb\") pod \"metallb-operator-webhook-server-64dff46b5d-ndsf6\" (UID: \"88695349-f8fe-4ca5-9d25-c7c822d23e89\") " 
pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.276780 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.420963 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm"] Jan 29 15:48:45 crc kubenswrapper[4835]: W0129 15:48:45.450958 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52588c5c_ffb3_4534_994f_ac105b5cf44d.slice/crio-31a6b0031755f838d6e5e8d51cd192d917c2adde64b3c37a035b10d966f907e9 WatchSource:0}: Error finding container 31a6b0031755f838d6e5e8d51cd192d917c2adde64b3c37a035b10d966f907e9: Status 404 returned error can't find the container with id 31a6b0031755f838d6e5e8d51cd192d917c2adde64b3c37a035b10d966f907e9 Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.588770 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6"] Jan 29 15:48:45 crc kubenswrapper[4835]: W0129 15:48:45.592009 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88695349_f8fe_4ca5_9d25_c7c822d23e89.slice/crio-589fb62771d03c6fa3d88d32f1cfdbae8e1be11b41eb34c0e30857967f2121d9 WatchSource:0}: Error finding container 589fb62771d03c6fa3d88d32f1cfdbae8e1be11b41eb34c0e30857967f2121d9: Status 404 returned error can't find the container with id 589fb62771d03c6fa3d88d32f1cfdbae8e1be11b41eb34c0e30857967f2121d9 Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.808025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" 
event={"ID":"88695349-f8fe-4ca5-9d25-c7c822d23e89","Type":"ContainerStarted","Data":"589fb62771d03c6fa3d88d32f1cfdbae8e1be11b41eb34c0e30857967f2121d9"} Jan 29 15:48:45 crc kubenswrapper[4835]: I0129 15:48:45.809090 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" event={"ID":"52588c5c-ffb3-4534-994f-ac105b5cf44d","Type":"ContainerStarted","Data":"31a6b0031755f838d6e5e8d51cd192d917c2adde64b3c37a035b10d966f907e9"} Jan 29 15:48:48 crc kubenswrapper[4835]: E0129 15:48:48.504756 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:48:48 crc kubenswrapper[4835]: I0129 15:48:48.828670 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" event={"ID":"52588c5c-ffb3-4534-994f-ac105b5cf44d","Type":"ContainerStarted","Data":"7048f824eb94d3716322723cfc75532a44b1841b523165f6e63cf9da126c6496"} Jan 29 15:48:48 crc kubenswrapper[4835]: I0129 15:48:48.829027 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:48:48 crc kubenswrapper[4835]: I0129 15:48:48.851063 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" podStartSLOduration=1.875068255 podStartE2EDuration="4.851044174s" podCreationTimestamp="2026-01-29 15:48:44 +0000 UTC" firstStartedPulling="2026-01-29 15:48:45.461557556 +0000 UTC m=+1227.654601210" lastFinishedPulling="2026-01-29 15:48:48.437533475 +0000 UTC m=+1230.630577129" observedRunningTime="2026-01-29 15:48:48.845288815 +0000 UTC 
m=+1231.038332489" watchObservedRunningTime="2026-01-29 15:48:48.851044174 +0000 UTC m=+1231.044087828" Jan 29 15:48:56 crc kubenswrapper[4835]: I0129 15:48:56.871310 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" event={"ID":"88695349-f8fe-4ca5-9d25-c7c822d23e89","Type":"ContainerStarted","Data":"99d1dbda778130350a202b6daccb533c912eb06fb3b86a9d466dde8b65acc8a7"} Jan 29 15:48:56 crc kubenswrapper[4835]: I0129 15:48:56.872626 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:48:56 crc kubenswrapper[4835]: I0129 15:48:56.912278 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" podStartSLOduration=1.921078928 podStartE2EDuration="12.912258721s" podCreationTimestamp="2026-01-29 15:48:44 +0000 UTC" firstStartedPulling="2026-01-29 15:48:45.595337014 +0000 UTC m=+1227.788380668" lastFinishedPulling="2026-01-29 15:48:56.586516807 +0000 UTC m=+1238.779560461" observedRunningTime="2026-01-29 15:48:56.910565689 +0000 UTC m=+1239.103609343" watchObservedRunningTime="2026-01-29 15:48:56.912258721 +0000 UTC m=+1239.105302375" Jan 29 15:49:02 crc kubenswrapper[4835]: E0129 15:49:02.493861 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:49:15 crc kubenswrapper[4835]: I0129 15:49:15.283680 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-64dff46b5d-ndsf6" Jan 29 15:49:16 crc kubenswrapper[4835]: E0129 15:49:16.494413 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" Jan 29 15:49:24 crc kubenswrapper[4835]: I0129 15:49:24.993772 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85cbcd5479-9qqqm" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.581296 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hs7gp"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.584768 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.586865 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.586904 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.586945 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mwrxt" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.602360 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.603228 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.605321 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.611501 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.616720 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-startup\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.616776 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/27ff8d5f-5738-4d00-8419-6b16a94826cc-kube-api-access-pllx6\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.616812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.616846 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc 
kubenswrapper[4835]: I0129 15:49:25.617150 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-reloader\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.617284 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-sockets\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.617403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-conf\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.670764 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cpg6z"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.672131 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.674246 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.674658 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-427n4" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.676026 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.676385 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.686578 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-b6cql"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.687427 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.688985 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.703323 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-b6cql"] Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-metrics-certs\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-sockets\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737324 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-conf\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737450 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd52j\" (UniqueName: \"kubernetes.io/projected/d550dcee-c0e2-461b-a5a7-19cf598f3193-kube-api-access-wd52j\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737538 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztbn\" (UniqueName: \"kubernetes.io/projected/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-kube-api-access-sztbn\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737571 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737604 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d550dcee-c0e2-461b-a5a7-19cf598f3193-metallb-excludel2\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737650 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-startup\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737694 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlq9\" (UniqueName: \"kubernetes.io/projected/d0d89751-719b-4841-828a-bbada38fdf17-kube-api-access-6xlq9\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737760 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-cert\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737821 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-conf\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-sockets\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737823 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/27ff8d5f-5738-4d00-8419-6b16a94826cc-kube-api-access-pllx6\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-metrics-certs\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737942 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737971 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.737997 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-reloader\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.738032 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" Jan 29 15:49:25 crc kubenswrapper[4835]: E0129 15:49:25.738155 4835 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 15:49:25 crc kubenswrapper[4835]: E0129 15:49:25.738191 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs podName:27ff8d5f-5738-4d00-8419-6b16a94826cc nodeName:}" failed. No retries permitted until 2026-01-29 15:49:26.238178433 +0000 UTC m=+1268.431222087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs") pod "frr-k8s-hs7gp" (UID: "27ff8d5f-5738-4d00-8419-6b16a94826cc") : secret "frr-k8s-certs-secret" not found Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.738291 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.738317 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/27ff8d5f-5738-4d00-8419-6b16a94826cc-reloader\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.738729 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/27ff8d5f-5738-4d00-8419-6b16a94826cc-frr-startup\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.759261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/27ff8d5f-5738-4d00-8419-6b16a94826cc-kube-api-access-pllx6\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.838946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839005 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-metrics-certs\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839049 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd52j\" (UniqueName: \"kubernetes.io/projected/d550dcee-c0e2-461b-a5a7-19cf598f3193-kube-api-access-wd52j\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839078 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sztbn\" (UniqueName: \"kubernetes.io/projected/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-kube-api-access-sztbn\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839103 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839119 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d550dcee-c0e2-461b-a5a7-19cf598f3193-metallb-excludel2\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 
15:49:25.839158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlq9\" (UniqueName: \"kubernetes.io/projected/d0d89751-719b-4841-828a-bbada38fdf17-kube-api-access-6xlq9\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839182 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-cert\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839211 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-metrics-certs\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql" Jan 29 15:49:25 crc kubenswrapper[4835]: E0129 15:49:25.839288 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 15:49:25 crc kubenswrapper[4835]: E0129 15:49:25.839364 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist podName:d550dcee-c0e2-461b-a5a7-19cf598f3193 nodeName:}" failed. No retries permitted until 2026-01-29 15:49:26.33934678 +0000 UTC m=+1268.532390434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist") pod "speaker-cpg6z" (UID: "d550dcee-c0e2-461b-a5a7-19cf598f3193") : secret "metallb-memberlist" not found
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.839912 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d550dcee-c0e2-461b-a5a7-19cf598f3193-metallb-excludel2\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.842043 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-metrics-certs\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.843978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-metrics-certs\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.847798 4835 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.851251 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.856064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0d89751-719b-4841-828a-bbada38fdf17-cert\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.868203 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlq9\" (UniqueName: \"kubernetes.io/projected/d0d89751-719b-4841-828a-bbada38fdf17-kube-api-access-6xlq9\") pod \"controller-6968d8fdc4-b6cql\" (UID: \"d0d89751-719b-4841-828a-bbada38fdf17\") " pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.878865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sztbn\" (UniqueName: \"kubernetes.io/projected/d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5-kube-api-access-sztbn\") pod \"frr-k8s-webhook-server-7df86c4f6c-jl5kk\" (UID: \"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.886101 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd52j\" (UniqueName: \"kubernetes.io/projected/d550dcee-c0e2-461b-a5a7-19cf598f3193-kube-api-access-wd52j\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:25 crc kubenswrapper[4835]: I0129 15:49:25.916168 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.042039 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.246771 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.250950 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/27ff8d5f-5738-4d00-8419-6b16a94826cc-metrics-certs\") pod \"frr-k8s-hs7gp\" (UID: \"27ff8d5f-5738-4d00-8419-6b16a94826cc\") " pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.308574 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"]
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.348616 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:26 crc kubenswrapper[4835]: E0129 15:49:26.348828 4835 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 29 15:49:26 crc kubenswrapper[4835]: E0129 15:49:26.348920 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist podName:d550dcee-c0e2-461b-a5a7-19cf598f3193 nodeName:}" failed. No retries permitted until 2026-01-29 15:49:27.348897226 +0000 UTC m=+1269.541940890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist") pod "speaker-cpg6z" (UID: "d550dcee-c0e2-461b-a5a7-19cf598f3193") : secret "metallb-memberlist" not found
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.431825 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-b6cql"]
Jan 29 15:49:26 crc kubenswrapper[4835]: W0129 15:49:26.438339 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0d89751_719b_4841_828a_bbada38fdf17.slice/crio-c0e0b2ae1923fe30a7b01dc58dbf86f039d2cc8fd1fd8fa5b8cdfa972ddcab0a WatchSource:0}: Error finding container c0e0b2ae1923fe30a7b01dc58dbf86f039d2cc8fd1fd8fa5b8cdfa972ddcab0a: Status 404 returned error can't find the container with id c0e0b2ae1923fe30a7b01dc58dbf86f039d2cc8fd1fd8fa5b8cdfa972ddcab0a
Jan 29 15:49:26 crc kubenswrapper[4835]: I0129 15:49:26.502330 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.065057 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-b6cql" event={"ID":"d0d89751-719b-4841-828a-bbada38fdf17","Type":"ContainerStarted","Data":"a83acaa09c3c6d70c6c99347c3d4f679587531aeca0d24c28795f15ee8595b1e"}
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.065110 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-b6cql" event={"ID":"d0d89751-719b-4841-828a-bbada38fdf17","Type":"ContainerStarted","Data":"557411071e10e81c08c5156d3016b4e04884a5c6a7af257e4f9abedc41c538c1"}
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.065124 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-b6cql" event={"ID":"d0d89751-719b-4841-828a-bbada38fdf17","Type":"ContainerStarted","Data":"c0e0b2ae1923fe30a7b01dc58dbf86f039d2cc8fd1fd8fa5b8cdfa972ddcab0a"}
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.065177 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.066117 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"23ddc2f11dc0aac736dbab6b960fa15fa19770925492a49faecef9fddc2e3f1c"}
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.066942 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" event={"ID":"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5","Type":"ContainerStarted","Data":"844d093bfae1b7d8c116ae6b03f6079b5c9e3f48a37616991a9df78fff7abde3"}
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.082610 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-b6cql" podStartSLOduration=2.0825909989999998 podStartE2EDuration="2.082590999s" podCreationTimestamp="2026-01-29 15:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:27.078541718 +0000 UTC m=+1269.271585382" watchObservedRunningTime="2026-01-29 15:49:27.082590999 +0000 UTC m=+1269.275634653"
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.371422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.381459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d550dcee-c0e2-461b-a5a7-19cf598f3193-memberlist\") pod \"speaker-cpg6z\" (UID: \"d550dcee-c0e2-461b-a5a7-19cf598f3193\") " pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:27 crc kubenswrapper[4835]: I0129 15:49:27.485727 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:27 crc kubenswrapper[4835]: W0129 15:49:27.507126 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd550dcee_c0e2_461b_a5a7_19cf598f3193.slice/crio-b4f5e277ba0649309257b22be5472d9533da65f12c32a63abd913df7fcd3e9f7 WatchSource:0}: Error finding container b4f5e277ba0649309257b22be5472d9533da65f12c32a63abd913df7fcd3e9f7: Status 404 returned error can't find the container with id b4f5e277ba0649309257b22be5472d9533da65f12c32a63abd913df7fcd3e9f7
Jan 29 15:49:28 crc kubenswrapper[4835]: I0129 15:49:28.076264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cpg6z" event={"ID":"d550dcee-c0e2-461b-a5a7-19cf598f3193","Type":"ContainerStarted","Data":"113a968d1816e0d69957eebb9b4940a6b0ad03258d097662238320f2edfe7288"}
Jan 29 15:49:28 crc kubenswrapper[4835]: I0129 15:49:28.076540 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cpg6z" event={"ID":"d550dcee-c0e2-461b-a5a7-19cf598f3193","Type":"ContainerStarted","Data":"3b498ce27b378d4642485b4e7a00226cd3bf1b79d6441de868abd74ef1f767cd"}
Jan 29 15:49:28 crc kubenswrapper[4835]: I0129 15:49:28.076551 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cpg6z" event={"ID":"d550dcee-c0e2-461b-a5a7-19cf598f3193","Type":"ContainerStarted","Data":"b4f5e277ba0649309257b22be5472d9533da65f12c32a63abd913df7fcd3e9f7"}
Jan 29 15:49:28 crc kubenswrapper[4835]: I0129 15:49:28.077030 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:28 crc kubenswrapper[4835]: I0129 15:49:28.530823 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cpg6z" podStartSLOduration=3.530803968 podStartE2EDuration="3.530803968s" podCreationTimestamp="2026-01-29 15:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:49:28.104221955 +0000 UTC m=+1270.297265599" watchObservedRunningTime="2026-01-29 15:49:28.530803968 +0000 UTC m=+1270.723847622"
Jan 29 15:49:30 crc kubenswrapper[4835]: I0129 15:49:30.095607 4835 generic.go:334] "Generic (PLEG): container finished" podID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerID="18d0bc76366f63df8f6fedc6b632c0b34138db27ef14f574899461d33b9828ce" exitCode=0
Jan 29 15:49:30 crc kubenswrapper[4835]: I0129 15:49:30.095749 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerDied","Data":"18d0bc76366f63df8f6fedc6b632c0b34138db27ef14f574899461d33b9828ce"}
Jan 29 15:49:34 crc kubenswrapper[4835]: I0129 15:49:34.131036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerStarted","Data":"aff8870056d07e144feba9942d0334d2ebcabc6bb5b969b876b853c7aa1dbedb"}
Jan 29 15:49:34 crc kubenswrapper[4835]: I0129 15:49:34.133438 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" event={"ID":"d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5","Type":"ContainerStarted","Data":"b657de05c53d774b3295bab77354e27c08aa2fdab62ab03bfd432d150253d221"}
Jan 29 15:49:34 crc kubenswrapper[4835]: I0129 15:49:34.133622 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"
Jan 29 15:49:34 crc kubenswrapper[4835]: I0129 15:49:34.159441 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x6zmb" podStartSLOduration=3.88122688 podStartE2EDuration="5m44.159406875s" podCreationTimestamp="2026-01-29 15:43:50 +0000 UTC" firstStartedPulling="2026-01-29 15:43:52.621369777 +0000 UTC m=+934.814413431" lastFinishedPulling="2026-01-29 15:49:32.899549782 +0000 UTC m=+1275.092593426" observedRunningTime="2026-01-29 15:49:34.152893293 +0000 UTC m=+1276.345936957" watchObservedRunningTime="2026-01-29 15:49:34.159406875 +0000 UTC m=+1276.352450539"
Jan 29 15:49:34 crc kubenswrapper[4835]: I0129 15:49:34.174791 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk" podStartSLOduration=2.570390104 podStartE2EDuration="9.174763417s" podCreationTimestamp="2026-01-29 15:49:25 +0000 UTC" firstStartedPulling="2026-01-29 15:49:26.314924431 +0000 UTC m=+1268.507968085" lastFinishedPulling="2026-01-29 15:49:32.919297744 +0000 UTC m=+1275.112341398" observedRunningTime="2026-01-29 15:49:34.173884895 +0000 UTC m=+1276.366928549" watchObservedRunningTime="2026-01-29 15:49:34.174763417 +0000 UTC m=+1276.367807071"
Jan 29 15:49:35 crc kubenswrapper[4835]: I0129 15:49:35.140172 4835 generic.go:334] "Generic (PLEG): container finished" podID="27ff8d5f-5738-4d00-8419-6b16a94826cc" containerID="498b65a032f85e2fb42fc61d4feda174e0aca7e745ad24d5a96fe15c5dcc71cc" exitCode=0
Jan 29 15:49:35 crc kubenswrapper[4835]: I0129 15:49:35.140217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerDied","Data":"498b65a032f85e2fb42fc61d4feda174e0aca7e745ad24d5a96fe15c5dcc71cc"}
Jan 29 15:49:36 crc kubenswrapper[4835]: I0129 15:49:36.046148 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-b6cql"
Jan 29 15:49:37 crc kubenswrapper[4835]: I0129 15:49:37.154089 4835 generic.go:334] "Generic (PLEG): container finished" podID="27ff8d5f-5738-4d00-8419-6b16a94826cc" containerID="85fb19f14468fd20a4e856e007af9a7c87f4241601fa98afd74f6933d3c4f92b" exitCode=0
Jan 29 15:49:37 crc kubenswrapper[4835]: I0129 15:49:37.154159 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerDied","Data":"85fb19f14468fd20a4e856e007af9a7c87f4241601fa98afd74f6933d3c4f92b"}
Jan 29 15:49:37 crc kubenswrapper[4835]: I0129 15:49:37.491372 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cpg6z"
Jan 29 15:49:38 crc kubenswrapper[4835]: I0129 15:49:38.163028 4835 generic.go:334] "Generic (PLEG): container finished" podID="27ff8d5f-5738-4d00-8419-6b16a94826cc" containerID="4771ada447832fd3597ae7ceedfd4d221096f33d3ced6dde4b24911857a09df1" exitCode=0
Jan 29 15:49:38 crc kubenswrapper[4835]: I0129 15:49:38.163066 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerDied","Data":"4771ada447832fd3597ae7ceedfd4d221096f33d3ced6dde4b24911857a09df1"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.015583 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"]
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.016695 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.018301 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.032266 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"]
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.130620 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.130690 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrtxq\" (UniqueName: \"kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.130729 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.181545 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"59ea55d19ac503ff89307fe2643405969e2ce20812017a81e55c31124baeb90b"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.181891 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"f7acc671a51cfef96e30e90169a1732d65aba412481e0575ece000919f997bac"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.181902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"2a0a922490f42c3b5c4695f08d2755399025e3a5c6b7a4d3dfca3463e5ec8e0f"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.181911 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"f2c32c6f4404cd15c01c0177d3b87ddc6d28a6d0b071e39f2bed7133e7aedc27"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.181918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"06aecbac56205631b41f726cb3cde98e1baef4a45984f8ebb2ad30fb24e9c007"}
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.231756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.231909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.231943 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrtxq\" (UniqueName: \"kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.232292 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.232665 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.253105 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrtxq\" (UniqueName: \"kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.341367 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:39 crc kubenswrapper[4835]: I0129 15:49:39.562175 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"]
Jan 29 15:49:39 crc kubenswrapper[4835]: W0129 15:49:39.588050 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc34b77b3_bdfd_45de_b618_c998472f00de.slice/crio-48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec WatchSource:0}: Error finding container 48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec: Status 404 returned error can't find the container with id 48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.188207 4835 generic.go:334] "Generic (PLEG): container finished" podID="c34b77b3-bdfd-45de-b618-c998472f00de" containerID="ae7443b77af474488176156777b42700787dce107ef0d16d6252076f1c896a75" exitCode=0
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.188270 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" event={"ID":"c34b77b3-bdfd-45de-b618-c998472f00de","Type":"ContainerDied","Data":"ae7443b77af474488176156777b42700787dce107ef0d16d6252076f1c896a75"}
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.188299 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" event={"ID":"c34b77b3-bdfd-45de-b618-c998472f00de","Type":"ContainerStarted","Data":"48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec"}
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.196833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hs7gp" event={"ID":"27ff8d5f-5738-4d00-8419-6b16a94826cc","Type":"ContainerStarted","Data":"33fe62b5857096e45ca9de94d9f17707c5fd9045d79475c878f164e0717f8249"}
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.197009 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.230772 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hs7gp" podStartSLOduration=7.455212007 podStartE2EDuration="15.230748576s" podCreationTimestamp="2026-01-29 15:49:25 +0000 UTC" firstStartedPulling="2026-01-29 15:49:26.590922677 +0000 UTC m=+1268.783966321" lastFinishedPulling="2026-01-29 15:49:34.366459236 +0000 UTC m=+1276.559502890" observedRunningTime="2026-01-29 15:49:40.227562657 +0000 UTC m=+1282.420606321" watchObservedRunningTime="2026-01-29 15:49:40.230748576 +0000 UTC m=+1282.423792240"
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.964869 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:40 crc kubenswrapper[4835]: I0129 15:49:40.965209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:41 crc kubenswrapper[4835]: I0129 15:49:41.005328 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:41 crc kubenswrapper[4835]: I0129 15:49:41.242194 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:41 crc kubenswrapper[4835]: I0129 15:49:41.502870 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:41 crc kubenswrapper[4835]: I0129 15:49:41.543268 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hs7gp"
Jan 29 15:49:43 crc kubenswrapper[4835]: I0129 15:49:43.349385 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zmb"]
Jan 29 15:49:43 crc kubenswrapper[4835]: I0129 15:49:43.349894 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x6zmb" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="registry-server" containerID="cri-o://aff8870056d07e144feba9942d0334d2ebcabc6bb5b969b876b853c7aa1dbedb" gracePeriod=2
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.237019 4835 generic.go:334] "Generic (PLEG): container finished" podID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerID="aff8870056d07e144feba9942d0334d2ebcabc6bb5b969b876b853c7aa1dbedb" exitCode=0
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.237304 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerDied","Data":"aff8870056d07e144feba9942d0334d2ebcabc6bb5b969b876b853c7aa1dbedb"}
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.537648 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.620408 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content\") pod \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") "
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.620540 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities\") pod \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") "
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.620643 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrkq8\" (UniqueName: \"kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8\") pod \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\" (UID: \"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4\") "
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.621563 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities" (OuterVolumeSpecName: "utilities") pod "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" (UID: "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.626482 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8" (OuterVolumeSpecName: "kube-api-access-zrkq8") pod "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" (UID: "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4"). InnerVolumeSpecName "kube-api-access-zrkq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.640257 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" (UID: "b35c1531-391d-4ce7-8e47-7dfd3c9cefd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.722066 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrkq8\" (UniqueName: \"kubernetes.io/projected/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-kube-api-access-zrkq8\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.722122 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:44 crc kubenswrapper[4835]: I0129 15:49:44.722142 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.244495 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6zmb" event={"ID":"b35c1531-391d-4ce7-8e47-7dfd3c9cefd4","Type":"ContainerDied","Data":"4e575c4e8d607623ea6b39738db0f609b497852fc4b64aa0a1ef2999860d252e"}
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.244563 4835 scope.go:117] "RemoveContainer" containerID="aff8870056d07e144feba9942d0334d2ebcabc6bb5b969b876b853c7aa1dbedb"
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.244625 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6zmb"
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.273310 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zmb"]
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.280627 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6zmb"]
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.722435 4835 scope.go:117] "RemoveContainer" containerID="18d0bc76366f63df8f6fedc6b632c0b34138db27ef14f574899461d33b9828ce"
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.758068 4835 scope.go:117] "RemoveContainer" containerID="6b28d5eeca741cf1d3d6cea9b4d6bb96bfe384efb5fc75e27315e8efe5a18f6c"
Jan 29 15:49:45 crc kubenswrapper[4835]: I0129 15:49:45.927785 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-jl5kk"
Jan 29 15:49:46 crc kubenswrapper[4835]: I0129 15:49:46.254669 4835 generic.go:334] "Generic (PLEG): container finished" podID="c34b77b3-bdfd-45de-b618-c998472f00de" containerID="06c01ebd10f6f0c9d63e1352f27a551714b41998f6b56e053d1d7afb16a1c821" exitCode=0
Jan 29 15:49:46 crc kubenswrapper[4835]: I0129 15:49:46.254833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" event={"ID":"c34b77b3-bdfd-45de-b618-c998472f00de","Type":"ContainerDied","Data":"06c01ebd10f6f0c9d63e1352f27a551714b41998f6b56e053d1d7afb16a1c821"}
Jan 29 15:49:46 crc kubenswrapper[4835]: I0129 15:49:46.503822 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" path="/var/lib/kubelet/pods/b35c1531-391d-4ce7-8e47-7dfd3c9cefd4/volumes"
Jan 29 15:49:47 crc kubenswrapper[4835]: I0129 15:49:47.263768 4835 generic.go:334] "Generic (PLEG): container finished" podID="c34b77b3-bdfd-45de-b618-c998472f00de" containerID="7f9ee9880324de9b462faeea2f3e866ec0471cbdfd0b06d255cf0d0a14e5c3d3" exitCode=0
Jan 29 15:49:47 crc kubenswrapper[4835]: I0129 15:49:47.263817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" event={"ID":"c34b77b3-bdfd-45de-b618-c998472f00de","Type":"ContainerDied","Data":"7f9ee9880324de9b462faeea2f3e866ec0471cbdfd0b06d255cf0d0a14e5c3d3"}
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.585688 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2"
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.675687 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrtxq\" (UniqueName: \"kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq\") pod \"c34b77b3-bdfd-45de-b618-c998472f00de\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") "
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.676073 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle\") pod \"c34b77b3-bdfd-45de-b618-c998472f00de\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") "
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.677018 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util\") pod \"c34b77b3-bdfd-45de-b618-c998472f00de\" (UID: \"c34b77b3-bdfd-45de-b618-c998472f00de\") "
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.677146 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle" (OuterVolumeSpecName: "bundle") pod "c34b77b3-bdfd-45de-b618-c998472f00de" (UID: "c34b77b3-bdfd-45de-b618-c998472f00de"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.677497 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.685746 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq" (OuterVolumeSpecName: "kube-api-access-jrtxq") pod "c34b77b3-bdfd-45de-b618-c998472f00de" (UID: "c34b77b3-bdfd-45de-b618-c998472f00de"). InnerVolumeSpecName "kube-api-access-jrtxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.686040 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util" (OuterVolumeSpecName: "util") pod "c34b77b3-bdfd-45de-b618-c998472f00de" (UID: "c34b77b3-bdfd-45de-b618-c998472f00de"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.778496 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b77b3-bdfd-45de-b618-c998472f00de-util\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:48 crc kubenswrapper[4835]: I0129 15:49:48.778529 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrtxq\" (UniqueName: \"kubernetes.io/projected/c34b77b3-bdfd-45de-b618-c998472f00de-kube-api-access-jrtxq\") on node \"crc\" DevicePath \"\""
Jan 29 15:49:49 crc kubenswrapper[4835]: I0129 15:49:49.281946 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" event={"ID":"c34b77b3-bdfd-45de-b618-c998472f00de","Type":"ContainerDied","Data":"48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec"}
Jan 29 15:49:49 crc kubenswrapper[4835]: I0129 15:49:49.282000 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ccc0422926706271ef465ab1bcc94889f4c749be2719205da0b297cd8f6eec"
Jan 29 15:49:49 crc kubenswrapper[4835]: I0129 15:49:49.282099 4835 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2" Jan 29 15:49:56 crc kubenswrapper[4835]: I0129 15:49:56.505802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hs7gp" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.293866 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs"] Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.294944 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="registry-server" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.294961 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="registry-server" Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.294975 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="pull" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.294983 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="pull" Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.294995 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="extract-content" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295004 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="extract-content" Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.295022 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="extract" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295029 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="extract" Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.295045 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="extract-utilities" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295052 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="extract-utilities" Jan 29 15:49:57 crc kubenswrapper[4835]: E0129 15:49:57.295060 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="util" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295069 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="util" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295264 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34b77b3-bdfd-45de-b618-c998472f00de" containerName="extract" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295280 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35c1531-391d-4ce7-8e47-7dfd3c9cefd4" containerName="registry-server" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.295864 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.297349 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-mfzps" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.298523 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.300312 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.317000 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs"] Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.388643 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.388751 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fk9\" (UniqueName: \"kubernetes.io/projected/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-kube-api-access-w8fk9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.489693 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w8fk9\" (UniqueName: \"kubernetes.io/projected/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-kube-api-access-w8fk9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.489777 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.490155 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.510451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fk9\" (UniqueName: \"kubernetes.io/projected/b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5-kube-api-access-w8fk9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hj4cs\" (UID: \"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:57 crc kubenswrapper[4835]: I0129 15:49:57.616982 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" Jan 29 15:49:58 crc kubenswrapper[4835]: I0129 15:49:58.012441 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs"] Jan 29 15:49:58 crc kubenswrapper[4835]: I0129 15:49:58.335572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" event={"ID":"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5","Type":"ContainerStarted","Data":"2773805ee5240a308b6c4556eca2f54a400d51ce9f39a5a8f6e3de0b505efd1f"} Jan 29 15:50:03 crc kubenswrapper[4835]: I0129 15:50:03.366491 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" event={"ID":"b67dd6ab-6501-41c9-8f4f-d1a0fb8579d5","Type":"ContainerStarted","Data":"ffb158139e6eac19da6c4b4a02fb3fc3d2c5f6d741a8456a1e7645d570be249d"} Jan 29 15:50:03 crc kubenswrapper[4835]: I0129 15:50:03.391384 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hj4cs" podStartSLOduration=1.917767618 podStartE2EDuration="6.391362482s" podCreationTimestamp="2026-01-29 15:49:57 +0000 UTC" firstStartedPulling="2026-01-29 15:49:58.020801763 +0000 UTC m=+1300.213845417" lastFinishedPulling="2026-01-29 15:50:02.494396627 +0000 UTC m=+1304.687440281" observedRunningTime="2026-01-29 15:50:03.385745462 +0000 UTC m=+1305.578789146" watchObservedRunningTime="2026-01-29 15:50:03.391362482 +0000 UTC m=+1305.584406146" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.583092 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-45j7x"] Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.584294 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.585982 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-stm92" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.586287 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.587558 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.601046 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-45j7x"] Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.717092 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: \"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.717182 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxr6t\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-kube-api-access-vxr6t\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: \"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.818440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: 
\"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.818529 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxr6t\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-kube-api-access-vxr6t\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: \"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.835082 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: \"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.835333 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxr6t\" (UniqueName: \"kubernetes.io/projected/08165ae4-532a-4357-9f95-1a1aeabe56ab-kube-api-access-vxr6t\") pod \"cert-manager-webhook-6888856db4-45j7x\" (UID: \"08165ae4-532a-4357-9f95-1a1aeabe56ab\") " pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:05 crc kubenswrapper[4835]: I0129 15:50:05.902023 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:06 crc kubenswrapper[4835]: I0129 15:50:06.342961 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-45j7x"] Jan 29 15:50:06 crc kubenswrapper[4835]: I0129 15:50:06.392848 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" event={"ID":"08165ae4-532a-4357-9f95-1a1aeabe56ab","Type":"ContainerStarted","Data":"74f5b549e229c83be2c9be26f2133b1789578f6d703950e63afc7fb4125de78e"} Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.784914 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-f7fvv"] Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.786070 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.787965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-x59xz" Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.802736 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-f7fvv"] Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.944040 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:07 crc kubenswrapper[4835]: I0129 15:50:07.944308 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktptj\" (UniqueName: 
\"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-kube-api-access-ktptj\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.045808 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktptj\" (UniqueName: \"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-kube-api-access-ktptj\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.045901 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.070622 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktptj\" (UniqueName: \"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-kube-api-access-ktptj\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.071938 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62d89b5b-077f-4d52-aaa5-7feab7ea8063-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-f7fvv\" (UID: \"62d89b5b-077f-4d52-aaa5-7feab7ea8063\") " pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.105745 4835 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" Jan 29 15:50:08 crc kubenswrapper[4835]: I0129 15:50:08.538141 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-f7fvv"] Jan 29 15:50:09 crc kubenswrapper[4835]: I0129 15:50:09.422317 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" event={"ID":"62d89b5b-077f-4d52-aaa5-7feab7ea8063","Type":"ContainerStarted","Data":"8a1faff97c36b700d28fa3425291c932a594a908f72aa6bc9b5a5ee18be8c99e"} Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.500161 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-4d2jn"] Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.502034 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.506298 4835 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tnkj7" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.527885 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-4d2jn"] Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.607818 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxh8t\" (UniqueName: \"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-kube-api-access-sxh8t\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.607965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-bound-sa-token\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.709699 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxh8t\" (UniqueName: \"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-kube-api-access-sxh8t\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.709815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-bound-sa-token\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.729538 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxh8t\" (UniqueName: \"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-kube-api-access-sxh8t\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.734011 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a5e7bd98-b92f-4603-9151-47c925db8362-bound-sa-token\") pod \"cert-manager-545d4d4674-4d2jn\" (UID: \"a5e7bd98-b92f-4603-9151-47c925db8362\") " pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:19 crc kubenswrapper[4835]: I0129 15:50:19.826161 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-4d2jn" Jan 29 15:50:21 crc kubenswrapper[4835]: I0129 15:50:21.614199 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:50:21 crc kubenswrapper[4835]: I0129 15:50:21.614575 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:50:21 crc kubenswrapper[4835]: I0129 15:50:21.908554 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-4d2jn"] Jan 29 15:50:21 crc kubenswrapper[4835]: W0129 15:50:21.917693 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5e7bd98_b92f_4603_9151_47c925db8362.slice/crio-59b75d6883f37be22e36448cdc24b1a0451e4819be0ce09354ee80b26843393d WatchSource:0}: Error finding container 59b75d6883f37be22e36448cdc24b1a0451e4819be0ce09354ee80b26843393d: Status 404 returned error can't find the container with id 59b75d6883f37be22e36448cdc24b1a0451e4819be0ce09354ee80b26843393d Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.509902 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" event={"ID":"62d89b5b-077f-4d52-aaa5-7feab7ea8063","Type":"ContainerStarted","Data":"613f4db28386fab474e7b7ecf5305445f207212a8e01bdb03709459fb63a086a"} Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.511679 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" event={"ID":"08165ae4-532a-4357-9f95-1a1aeabe56ab","Type":"ContainerStarted","Data":"e3a14ecffbd7d87aa42427113be923b4171c9c641b111036e24391945bafe8e8"} Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.512987 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-4d2jn" event={"ID":"a5e7bd98-b92f-4603-9151-47c925db8362","Type":"ContainerStarted","Data":"79311d38e758daf3261efcd25c0d56634bff6eea06e54edb0e0c242971a98a3d"} Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.513014 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-4d2jn" event={"ID":"a5e7bd98-b92f-4603-9151-47c925db8362","Type":"ContainerStarted","Data":"59b75d6883f37be22e36448cdc24b1a0451e4819be0ce09354ee80b26843393d"} Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.537030 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-f7fvv" podStartSLOduration=2.573406677 podStartE2EDuration="15.537008953s" podCreationTimestamp="2026-01-29 15:50:07 +0000 UTC" firstStartedPulling="2026-01-29 15:50:08.736760174 +0000 UTC m=+1310.929803828" lastFinishedPulling="2026-01-29 15:50:21.70036245 +0000 UTC m=+1323.893406104" observedRunningTime="2026-01-29 15:50:22.531550008 +0000 UTC m=+1324.724593692" watchObservedRunningTime="2026-01-29 15:50:22.537008953 +0000 UTC m=+1324.730052647" Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.850175 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-4d2jn" podStartSLOduration=3.84797312 podStartE2EDuration="3.84797312s" podCreationTimestamp="2026-01-29 15:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:50:22.829811148 +0000 UTC m=+1325.022854832" 
watchObservedRunningTime="2026-01-29 15:50:22.84797312 +0000 UTC m=+1325.041016774" Jan 29 15:50:22 crc kubenswrapper[4835]: I0129 15:50:22.859937 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" podStartSLOduration=2.518998528 podStartE2EDuration="17.859922457s" podCreationTimestamp="2026-01-29 15:50:05 +0000 UTC" firstStartedPulling="2026-01-29 15:50:06.359431541 +0000 UTC m=+1308.552475195" lastFinishedPulling="2026-01-29 15:50:21.70035545 +0000 UTC m=+1323.893399124" observedRunningTime="2026-01-29 15:50:22.859077586 +0000 UTC m=+1325.052121280" watchObservedRunningTime="2026-01-29 15:50:22.859922457 +0000 UTC m=+1325.052966111" Jan 29 15:50:23 crc kubenswrapper[4835]: I0129 15:50:23.517751 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:30 crc kubenswrapper[4835]: I0129 15:50:30.929802 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-45j7x" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.461852 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"] Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.464876 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9p89g" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.466952 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.468769 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rdlsk" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.469372 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.476006 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"] Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.563197 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgjq\" (UniqueName: \"kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq\") pod \"openstack-operator-index-9p89g\" (UID: \"8a0f14f9-9811-445b-a3ce-d1067aa9d051\") " pod="openstack-operators/openstack-operator-index-9p89g" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.664637 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgjq\" (UniqueName: \"kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq\") pod \"openstack-operator-index-9p89g\" (UID: \"8a0f14f9-9811-445b-a3ce-d1067aa9d051\") " pod="openstack-operators/openstack-operator-index-9p89g" Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.684740 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgjq\" (UniqueName: \"kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq\") pod \"openstack-operator-index-9p89g\" (UID: 
\"8a0f14f9-9811-445b-a3ce-d1067aa9d051\") " pod="openstack-operators/openstack-operator-index-9p89g"
Jan 29 15:50:37 crc kubenswrapper[4835]: I0129 15:50:37.799183 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9p89g"
Jan 29 15:50:38 crc kubenswrapper[4835]: I0129 15:50:38.311665 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"]
Jan 29 15:50:38 crc kubenswrapper[4835]: I0129 15:50:38.611098 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9p89g" event={"ID":"8a0f14f9-9811-445b-a3ce-d1067aa9d051","Type":"ContainerStarted","Data":"d7ace111fce24661f45e456e6ae9eabc7620b1b48e377fe52037f7baaed6e015"}
Jan 29 15:50:42 crc kubenswrapper[4835]: I0129 15:50:42.842394 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"]
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.449602 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8n788"]
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.451628 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.462394 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8n788"]
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.563209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9fdw\" (UniqueName: \"kubernetes.io/projected/13c4685b-af72-42f0-8ba3-1d3ef0e82c87-kube-api-access-l9fdw\") pod \"openstack-operator-index-8n788\" (UID: \"13c4685b-af72-42f0-8ba3-1d3ef0e82c87\") " pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.665670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9fdw\" (UniqueName: \"kubernetes.io/projected/13c4685b-af72-42f0-8ba3-1d3ef0e82c87-kube-api-access-l9fdw\") pod \"openstack-operator-index-8n788\" (UID: \"13c4685b-af72-42f0-8ba3-1d3ef0e82c87\") " pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.726855 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9fdw\" (UniqueName: \"kubernetes.io/projected/13c4685b-af72-42f0-8ba3-1d3ef0e82c87-kube-api-access-l9fdw\") pod \"openstack-operator-index-8n788\" (UID: \"13c4685b-af72-42f0-8ba3-1d3ef0e82c87\") " pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:43 crc kubenswrapper[4835]: I0129 15:50:43.782022 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:44 crc kubenswrapper[4835]: I0129 15:50:44.435708 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8n788"]
Jan 29 15:50:44 crc kubenswrapper[4835]: W0129 15:50:44.443420 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c4685b_af72_42f0_8ba3_1d3ef0e82c87.slice/crio-0c8e9d2309f5fa209ff29ca5912c02ecff4b04fee9a163980b23fc830e048067 WatchSource:0}: Error finding container 0c8e9d2309f5fa209ff29ca5912c02ecff4b04fee9a163980b23fc830e048067: Status 404 returned error can't find the container with id 0c8e9d2309f5fa209ff29ca5912c02ecff4b04fee9a163980b23fc830e048067
Jan 29 15:50:44 crc kubenswrapper[4835]: I0129 15:50:44.657028 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8n788" event={"ID":"13c4685b-af72-42f0-8ba3-1d3ef0e82c87","Type":"ContainerStarted","Data":"0c8e9d2309f5fa209ff29ca5912c02ecff4b04fee9a163980b23fc830e048067"}
Jan 29 15:50:44 crc kubenswrapper[4835]: I0129 15:50:44.658954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9p89g" event={"ID":"8a0f14f9-9811-445b-a3ce-d1067aa9d051","Type":"ContainerStarted","Data":"be221ced17260b09f116c0341d02401ac2eb2616beec5d32e11a38f792fd06bd"}
Jan 29 15:50:44 crc kubenswrapper[4835]: I0129 15:50:44.659146 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9p89g" podUID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" containerName="registry-server" containerID="cri-o://be221ced17260b09f116c0341d02401ac2eb2616beec5d32e11a38f792fd06bd" gracePeriod=2
Jan 29 15:50:44 crc kubenswrapper[4835]: I0129 15:50:44.688053 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9p89g" podStartSLOduration=2.010106619 podStartE2EDuration="7.688027073s" podCreationTimestamp="2026-01-29 15:50:37 +0000 UTC" firstStartedPulling="2026-01-29 15:50:38.329839685 +0000 UTC m=+1340.522883339" lastFinishedPulling="2026-01-29 15:50:44.007760129 +0000 UTC m=+1346.200803793" observedRunningTime="2026-01-29 15:50:44.681564392 +0000 UTC m=+1346.874608046" watchObservedRunningTime="2026-01-29 15:50:44.688027073 +0000 UTC m=+1346.881070727"
Jan 29 15:50:45 crc kubenswrapper[4835]: I0129 15:50:45.679751 4835 generic.go:334] "Generic (PLEG): container finished" podID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" containerID="be221ced17260b09f116c0341d02401ac2eb2616beec5d32e11a38f792fd06bd" exitCode=0
Jan 29 15:50:45 crc kubenswrapper[4835]: I0129 15:50:45.679845 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9p89g" event={"ID":"8a0f14f9-9811-445b-a3ce-d1067aa9d051","Type":"ContainerDied","Data":"be221ced17260b09f116c0341d02401ac2eb2616beec5d32e11a38f792fd06bd"}
Jan 29 15:50:47 crc kubenswrapper[4835]: I0129 15:50:47.800140 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-9p89g"
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.315045 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9p89g"
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.432535 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlgjq\" (UniqueName: \"kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq\") pod \"8a0f14f9-9811-445b-a3ce-d1067aa9d051\" (UID: \"8a0f14f9-9811-445b-a3ce-d1067aa9d051\") "
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.438605 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq" (OuterVolumeSpecName: "kube-api-access-hlgjq") pod "8a0f14f9-9811-445b-a3ce-d1067aa9d051" (UID: "8a0f14f9-9811-445b-a3ce-d1067aa9d051"). InnerVolumeSpecName "kube-api-access-hlgjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.536540 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlgjq\" (UniqueName: \"kubernetes.io/projected/8a0f14f9-9811-445b-a3ce-d1067aa9d051-kube-api-access-hlgjq\") on node \"crc\" DevicePath \"\""
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.704117 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9p89g" event={"ID":"8a0f14f9-9811-445b-a3ce-d1067aa9d051","Type":"ContainerDied","Data":"d7ace111fce24661f45e456e6ae9eabc7620b1b48e377fe52037f7baaed6e015"}
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.704180 4835 scope.go:117] "RemoveContainer" containerID="be221ced17260b09f116c0341d02401ac2eb2616beec5d32e11a38f792fd06bd"
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.704207 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9p89g"
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.730877 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"]
Jan 29 15:50:48 crc kubenswrapper[4835]: I0129 15:50:48.736462 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9p89g"]
Jan 29 15:50:49 crc kubenswrapper[4835]: I0129 15:50:49.712315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8n788" event={"ID":"13c4685b-af72-42f0-8ba3-1d3ef0e82c87","Type":"ContainerStarted","Data":"7efe72d52c6090d4705d8127a0e9e2c80fab11bbbdaae2ed5ca431b28589dcfd"}
Jan 29 15:50:49 crc kubenswrapper[4835]: I0129 15:50:49.732433 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8n788" podStartSLOduration=1.901850832 podStartE2EDuration="6.732413027s" podCreationTimestamp="2026-01-29 15:50:43 +0000 UTC" firstStartedPulling="2026-01-29 15:50:44.449067148 +0000 UTC m=+1346.642110802" lastFinishedPulling="2026-01-29 15:50:49.279629343 +0000 UTC m=+1351.472672997" observedRunningTime="2026-01-29 15:50:49.72810515 +0000 UTC m=+1351.921148814" watchObservedRunningTime="2026-01-29 15:50:49.732413027 +0000 UTC m=+1351.925456681"
Jan 29 15:50:50 crc kubenswrapper[4835]: I0129 15:50:50.501542 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" path="/var/lib/kubelet/pods/8a0f14f9-9811-445b-a3ce-d1067aa9d051/volumes"
Jan 29 15:50:51 crc kubenswrapper[4835]: I0129 15:50:51.614108 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:50:51 crc kubenswrapper[4835]: I0129 15:50:51.614174 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:50:53 crc kubenswrapper[4835]: I0129 15:50:53.782959 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:53 crc kubenswrapper[4835]: I0129 15:50:53.783313 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:53 crc kubenswrapper[4835]: I0129 15:50:53.809701 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:50:54 crc kubenswrapper[4835]: I0129 15:50:54.795588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8n788"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.293199 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"]
Jan 29 15:51:01 crc kubenswrapper[4835]: E0129 15:51:01.294008 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" containerName="registry-server"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.294020 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" containerName="registry-server"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.294133 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0f14f9-9811-445b-a3ce-d1067aa9d051" containerName="registry-server"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.294903 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.297510 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jlcmk"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.308329 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"]
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.330111 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7fw\" (UniqueName: \"kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.330247 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.330298 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.431697 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.431779 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.431827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf7fw\" (UniqueName: \"kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.432264 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.432483 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.455870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf7fw\" (UniqueName: \"kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.622947 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:01 crc kubenswrapper[4835]: I0129 15:51:01.850210 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"]
Jan 29 15:51:01 crc kubenswrapper[4835]: W0129 15:51:01.864656 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdc8ed3_9c40_4e5c_81c2_be42a9bea0ff.slice/crio-a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32 WatchSource:0}: Error finding container a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32: Status 404 returned error can't find the container with id a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32
Jan 29 15:51:02 crc kubenswrapper[4835]: I0129 15:51:02.799742 4835 generic.go:334] "Generic (PLEG): container finished" podID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerID="0d8da3e1d1a4a8da702b9242c7dae222b65c45cb582834feca59cbfee618965e" exitCode=0
Jan 29 15:51:02 crc kubenswrapper[4835]: I0129 15:51:02.799801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv" event={"ID":"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff","Type":"ContainerDied","Data":"0d8da3e1d1a4a8da702b9242c7dae222b65c45cb582834feca59cbfee618965e"}
Jan 29 15:51:02 crc kubenswrapper[4835]: I0129 15:51:02.799853 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv" event={"ID":"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff","Type":"ContainerStarted","Data":"a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32"}
Jan 29 15:51:04 crc kubenswrapper[4835]: I0129 15:51:04.830574 4835 generic.go:334] "Generic (PLEG): container finished" podID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerID="b634ba680364231db6055f3f9ce462f6608c8e7b59da72a759a8495a1b66133c" exitCode=0
Jan 29 15:51:04 crc kubenswrapper[4835]: I0129 15:51:04.830637 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv" event={"ID":"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff","Type":"ContainerDied","Data":"b634ba680364231db6055f3f9ce462f6608c8e7b59da72a759a8495a1b66133c"}
Jan 29 15:51:05 crc kubenswrapper[4835]: I0129 15:51:05.837977 4835 generic.go:334] "Generic (PLEG): container finished" podID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerID="2a9f1f2bb64e75003bc17899c930a0caffe3f9e87db85a2b09be5a605c122305" exitCode=0
Jan 29 15:51:05 crc kubenswrapper[4835]: I0129 15:51:05.838017 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv" event={"ID":"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff","Type":"ContainerDied","Data":"2a9f1f2bb64e75003bc17899c930a0caffe3f9e87db85a2b09be5a605c122305"}
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.140585 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.302398 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle\") pod \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") "
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.302566 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf7fw\" (UniqueName: \"kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw\") pod \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") "
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.302730 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util\") pod \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\" (UID: \"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff\") "
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.303175 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle" (OuterVolumeSpecName: "bundle") pod "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" (UID: "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.310207 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw" (OuterVolumeSpecName: "kube-api-access-tf7fw") pod "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" (UID: "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff"). InnerVolumeSpecName "kube-api-access-tf7fw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.404054 4835 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.404090 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf7fw\" (UniqueName: \"kubernetes.io/projected/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-kube-api-access-tf7fw\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.857283 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv" event={"ID":"2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff","Type":"ContainerDied","Data":"a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32"}
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.857340 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a854aa3a9bfeaad13d1bb6002b4637262835fd66b85f817bee134c6413660d32"
Jan 29 15:51:07 crc kubenswrapper[4835]: I0129 15:51:07.857400 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv"
Jan 29 15:51:08 crc kubenswrapper[4835]: I0129 15:51:08.008113 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util" (OuterVolumeSpecName: "util") pod "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" (UID: "2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 15:51:08 crc kubenswrapper[4835]: I0129 15:51:08.011393 4835 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff-util\") on node \"crc\" DevicePath \"\""
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.593979 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"]
Jan 29 15:51:11 crc kubenswrapper[4835]: E0129 15:51:11.594836 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="pull"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.594852 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="pull"
Jan 29 15:51:11 crc kubenswrapper[4835]: E0129 15:51:11.594875 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="util"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.594883 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="util"
Jan 29 15:51:11 crc kubenswrapper[4835]: E0129 15:51:11.594893 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="extract"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.594901 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="extract"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.595054 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff" containerName="extract"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.595613 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.599075 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2tjfr"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.617316 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"]
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.760220 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grjnd\" (UniqueName: \"kubernetes.io/projected/c25d2746-9c15-4529-9de9-a62bf919ccce-kube-api-access-grjnd\") pod \"openstack-operator-controller-init-757f46c65d-66f9l\" (UID: \"c25d2746-9c15-4529-9de9-a62bf919ccce\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.861071 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grjnd\" (UniqueName: \"kubernetes.io/projected/c25d2746-9c15-4529-9de9-a62bf919ccce-kube-api-access-grjnd\") pod \"openstack-operator-controller-init-757f46c65d-66f9l\" (UID: \"c25d2746-9c15-4529-9de9-a62bf919ccce\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.884390 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grjnd\" (UniqueName: \"kubernetes.io/projected/c25d2746-9c15-4529-9de9-a62bf919ccce-kube-api-access-grjnd\") pod \"openstack-operator-controller-init-757f46c65d-66f9l\" (UID: \"c25d2746-9c15-4529-9de9-a62bf919ccce\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:11 crc kubenswrapper[4835]: I0129 15:51:11.915262 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:12 crc kubenswrapper[4835]: I0129 15:51:12.341939 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"]
Jan 29 15:51:12 crc kubenswrapper[4835]: I0129 15:51:12.906654 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l" event={"ID":"c25d2746-9c15-4529-9de9-a62bf919ccce","Type":"ContainerStarted","Data":"ba662bf3e264419bbb69a97a2ce9165c88c7f5aec73e4ff05e0ba09c4feecc78"}
Jan 29 15:51:19 crc kubenswrapper[4835]: I0129 15:51:19.964750 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l" event={"ID":"c25d2746-9c15-4529-9de9-a62bf919ccce","Type":"ContainerStarted","Data":"f47923261185cd0d2d47cd008d484740f5eb9747f3f6e19e4bf13800c09c3cf1"}
Jan 29 15:51:19 crc kubenswrapper[4835]: I0129 15:51:19.965938 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:20 crc kubenswrapper[4835]: I0129 15:51:20.019604 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l" podStartSLOduration=2.565803019 podStartE2EDuration="9.019588233s" podCreationTimestamp="2026-01-29 15:51:11 +0000 UTC" firstStartedPulling="2026-01-29 15:51:12.347417051 +0000 UTC m=+1374.540460705" lastFinishedPulling="2026-01-29 15:51:18.801202265 +0000 UTC m=+1380.994245919" observedRunningTime="2026-01-29 15:51:20.016740992 +0000 UTC m=+1382.209784646" watchObservedRunningTime="2026-01-29 15:51:20.019588233 +0000 UTC m=+1382.212631887"
Jan 29 15:51:21 crc kubenswrapper[4835]: I0129 15:51:21.614789 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 15:51:21 crc kubenswrapper[4835]: I0129 15:51:21.614853 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 15:51:21 crc kubenswrapper[4835]: I0129 15:51:21.614899 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8"
Jan 29 15:51:21 crc kubenswrapper[4835]: I0129 15:51:21.615543 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 15:51:21 crc kubenswrapper[4835]: I0129 15:51:21.615606 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3" gracePeriod=600
Jan 29 15:51:23 crc kubenswrapper[4835]: I0129 15:51:23.193634 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3" exitCode=0
Jan 29 15:51:23 crc kubenswrapper[4835]: I0129 15:51:23.193664 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3"}
Jan 29 15:51:23 crc kubenswrapper[4835]: I0129 15:51:23.194036 4835 scope.go:117] "RemoveContainer" containerID="e25e908e6d3f7cc11b72ae93f34a777b134a876ca2aa55bbcb30b6d39d1bab0b"
Jan 29 15:51:24 crc kubenswrapper[4835]: I0129 15:51:24.200573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085"}
Jan 29 15:51:31 crc kubenswrapper[4835]: I0129 15:51:31.918550 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-66f9l"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.334054 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.335509 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.341561 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dnj5d"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.342353 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.343607 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.347351 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dk7pz"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.354896 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.363395 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.390564 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.392109 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.396820 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-n485p"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.436672 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.442542 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.443391 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.446741 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.447592 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.448904 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8pfk7"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.450104 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j267t"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.458041 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.458790 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.461076 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lwt4j"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.479538 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.484532 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.485389 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.522347 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.523313 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.537436 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf"]
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.538391 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.542081 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.542108 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vr26c"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.543809 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rccr\" (UniqueName: \"kubernetes.io/projected/7cefb6c7-0870-48df-8246-c69e8cbf82c6-kube-api-access-9rccr\") pod \"horizon-operator-controller-manager-5fb775575f-8c998\" (UID: \"7cefb6c7-0870-48df-8246-c69e8cbf82c6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.543880 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssnw\" (UniqueName: \"kubernetes.io/projected/53c52d1f-17be-421f-8f5e-73cf6aefc28d-kube-api-access-fssnw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fq8mf\" (UID: \"53c52d1f-17be-421f-8f5e-73cf6aefc28d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf"
Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.543932 4835 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96p4j\" (UniqueName: \"kubernetes.io/projected/35afdcf5-a791-4fb2-aeea-0200ff95edf5-kube-api-access-96p4j\") pod \"cinder-operator-controller-manager-8d874c8fc-5n2pr\" (UID: \"35afdcf5-a791-4fb2-aeea-0200ff95edf5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.543975 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qs4\" (UniqueName: \"kubernetes.io/projected/5b5ab528-bcc6-47f2-a7f0-798dc4a09acc-kube-api-access-m5qs4\") pod \"heat-operator-controller-manager-69d6db494d-x6zhz\" (UID: \"5b5ab528-bcc6-47f2-a7f0-798dc4a09acc\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.544048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kd5\" (UniqueName: \"kubernetes.io/projected/21915d77-1cfb-4c42-827a-a918f63224b5-kube-api-access-82kd5\") pod \"designate-operator-controller-manager-6d9697b7f4-ftjdl\" (UID: \"21915d77-1cfb-4c42-827a-a918f63224b5\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.544094 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wcg\" (UniqueName: \"kubernetes.io/projected/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-kube-api-access-c9wcg\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.544122 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mfb2v\" (UniqueName: \"kubernetes.io/projected/3276caaa-e664-4dff-961d-0a4121d4c6a1-kube-api-access-mfb2v\") pod \"glance-operator-controller-manager-8886f4c47-jgc8x\" (UID: \"3276caaa-e664-4dff-961d-0a4121d4c6a1\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.544250 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.544288 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrjc\" (UniqueName: \"kubernetes.io/projected/b19d416a-a9ff-435d-bf52-edd87d12da46-kube-api-access-zsrjc\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-bm5sf\" (UID: \"b19d416a-a9ff-435d-bf52-edd87d12da46\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.549880 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lhgh9" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.577532 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.590869 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.592180 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.595630 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vv85q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.620514 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.621220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.625766 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m7k9g" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.637341 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645012 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645055 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsrjc\" (UniqueName: \"kubernetes.io/projected/b19d416a-a9ff-435d-bf52-edd87d12da46-kube-api-access-zsrjc\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-bm5sf\" (UID: \"b19d416a-a9ff-435d-bf52-edd87d12da46\") " 
pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645098 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rccr\" (UniqueName: \"kubernetes.io/projected/7cefb6c7-0870-48df-8246-c69e8cbf82c6-kube-api-access-9rccr\") pod \"horizon-operator-controller-manager-5fb775575f-8c998\" (UID: \"7cefb6c7-0870-48df-8246-c69e8cbf82c6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645126 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssnw\" (UniqueName: \"kubernetes.io/projected/53c52d1f-17be-421f-8f5e-73cf6aefc28d-kube-api-access-fssnw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fq8mf\" (UID: \"53c52d1f-17be-421f-8f5e-73cf6aefc28d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645149 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96p4j\" (UniqueName: \"kubernetes.io/projected/35afdcf5-a791-4fb2-aeea-0200ff95edf5-kube-api-access-96p4j\") pod \"cinder-operator-controller-manager-8d874c8fc-5n2pr\" (UID: \"35afdcf5-a791-4fb2-aeea-0200ff95edf5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645180 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qs4\" (UniqueName: \"kubernetes.io/projected/5b5ab528-bcc6-47f2-a7f0-798dc4a09acc-kube-api-access-m5qs4\") pod \"heat-operator-controller-manager-69d6db494d-x6zhz\" (UID: \"5b5ab528-bcc6-47f2-a7f0-798dc4a09acc\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645223 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82kd5\" (UniqueName: \"kubernetes.io/projected/21915d77-1cfb-4c42-827a-a918f63224b5-kube-api-access-82kd5\") pod \"designate-operator-controller-manager-6d9697b7f4-ftjdl\" (UID: \"21915d77-1cfb-4c42-827a-a918f63224b5\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645246 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wcg\" (UniqueName: \"kubernetes.io/projected/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-kube-api-access-c9wcg\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645271 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfb2v\" (UniqueName: \"kubernetes.io/projected/3276caaa-e664-4dff-961d-0a4121d4c6a1-kube-api-access-mfb2v\") pod \"glance-operator-controller-manager-8886f4c47-jgc8x\" (UID: \"3276caaa-e664-4dff-961d-0a4121d4c6a1\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645302 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swf4x\" (UniqueName: \"kubernetes.io/projected/514ee3fd-e758-471d-8378-b5b1c566ba36-kube-api-access-swf4x\") pod \"keystone-operator-controller-manager-84f48565d4-pvll8\" (UID: \"514ee3fd-e758-471d-8378-b5b1c566ba36\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.645333 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6lqz\" (UniqueName: 
\"kubernetes.io/projected/0e32b624-7997-4c99-a4d9-dff4da47132e-kube-api-access-n6lqz\") pod \"manila-operator-controller-manager-7dd968899f-6n6sr\" (UID: \"0e32b624-7997-4c99-a4d9-dff4da47132e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:51:55 crc kubenswrapper[4835]: E0129 15:51:55.645534 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:51:55 crc kubenswrapper[4835]: E0129 15:51:55.645585 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:51:56.145566418 +0000 UTC m=+1418.338610072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.652519 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.653541 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.671632 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dc82q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.709212 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qs4\" (UniqueName: \"kubernetes.io/projected/5b5ab528-bcc6-47f2-a7f0-798dc4a09acc-kube-api-access-m5qs4\") pod \"heat-operator-controller-manager-69d6db494d-x6zhz\" (UID: \"5b5ab528-bcc6-47f2-a7f0-798dc4a09acc\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.713277 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.724098 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96p4j\" (UniqueName: \"kubernetes.io/projected/35afdcf5-a791-4fb2-aeea-0200ff95edf5-kube-api-access-96p4j\") pod \"cinder-operator-controller-manager-8d874c8fc-5n2pr\" (UID: \"35afdcf5-a791-4fb2-aeea-0200ff95edf5\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.742142 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfb2v\" (UniqueName: \"kubernetes.io/projected/3276caaa-e664-4dff-961d-0a4121d4c6a1-kube-api-access-mfb2v\") pod \"glance-operator-controller-manager-8886f4c47-jgc8x\" (UID: \"3276caaa-e664-4dff-961d-0a4121d4c6a1\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.745220 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.745953 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.746937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rccr\" (UniqueName: \"kubernetes.io/projected/7cefb6c7-0870-48df-8246-c69e8cbf82c6-kube-api-access-9rccr\") pod \"horizon-operator-controller-manager-5fb775575f-8c998\" (UID: \"7cefb6c7-0870-48df-8246-c69e8cbf82c6\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.747424 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swf4x\" (UniqueName: \"kubernetes.io/projected/514ee3fd-e758-471d-8378-b5b1c566ba36-kube-api-access-swf4x\") pod \"keystone-operator-controller-manager-84f48565d4-pvll8\" (UID: \"514ee3fd-e758-471d-8378-b5b1c566ba36\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.747447 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6lqz\" (UniqueName: \"kubernetes.io/projected/0e32b624-7997-4c99-a4d9-dff4da47132e-kube-api-access-n6lqz\") pod \"manila-operator-controller-manager-7dd968899f-6n6sr\" (UID: \"0e32b624-7997-4c99-a4d9-dff4da47132e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.747484 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9kh\" (UniqueName: \"kubernetes.io/projected/41e42cf7-1c34-41ad-ab8f-0fd362f01f58-kube-api-access-7v9kh\") pod 
\"mariadb-operator-controller-manager-67bf948998-n6nf2\" (UID: \"41e42cf7-1c34-41ad-ab8f-0fd362f01f58\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.757079 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wcg\" (UniqueName: \"kubernetes.io/projected/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-kube-api-access-c9wcg\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.763451 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-xqgw2" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.768312 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsrjc\" (UniqueName: \"kubernetes.io/projected/b19d416a-a9ff-435d-bf52-edd87d12da46-kube-api-access-zsrjc\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-bm5sf\" (UID: \"b19d416a-a9ff-435d-bf52-edd87d12da46\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.770313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fssnw\" (UniqueName: \"kubernetes.io/projected/53c52d1f-17be-421f-8f5e-73cf6aefc28d-kube-api-access-fssnw\") pod \"ironic-operator-controller-manager-5f4b8bd54d-fq8mf\" (UID: \"53c52d1f-17be-421f-8f5e-73cf6aefc28d\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.772908 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.793666 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.814037 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.830398 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.831517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6lqz\" (UniqueName: \"kubernetes.io/projected/0e32b624-7997-4c99-a4d9-dff4da47132e-kube-api-access-n6lqz\") pod \"manila-operator-controller-manager-7dd968899f-6n6sr\" (UID: \"0e32b624-7997-4c99-a4d9-dff4da47132e\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.836051 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swf4x\" (UniqueName: \"kubernetes.io/projected/514ee3fd-e758-471d-8378-b5b1c566ba36-kube-api-access-swf4x\") pod \"keystone-operator-controller-manager-84f48565d4-pvll8\" (UID: \"514ee3fd-e758-471d-8378-b5b1c566ba36\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.856198 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9kh\" (UniqueName: \"kubernetes.io/projected/41e42cf7-1c34-41ad-ab8f-0fd362f01f58-kube-api-access-7v9kh\") pod \"mariadb-operator-controller-manager-67bf948998-n6nf2\" (UID: 
\"41e42cf7-1c34-41ad-ab8f-0fd362f01f58\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.856261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82kd5\" (UniqueName: \"kubernetes.io/projected/21915d77-1cfb-4c42-827a-a918f63224b5-kube-api-access-82kd5\") pod \"designate-operator-controller-manager-6d9697b7f4-ftjdl\" (UID: \"21915d77-1cfb-4c42-827a-a918f63224b5\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.867651 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.887182 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.895385 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.918108 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r"] Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.920080 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.921935 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.934396 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-qpk7p" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.935398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9kh\" (UniqueName: \"kubernetes.io/projected/41e42cf7-1c34-41ad-ab8f-0fd362f01f58-kube-api-access-7v9kh\") pod \"mariadb-operator-controller-manager-67bf948998-n6nf2\" (UID: \"41e42cf7-1c34-41ad-ab8f-0fd362f01f58\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.955711 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.956102 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.960855 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7dj\" (UniqueName: \"kubernetes.io/projected/d82b7063-6a29-4b42-807d-f49f8881be3e-kube-api-access-2t7dj\") pod \"nova-operator-controller-manager-55bff696bd-g9pk9\" (UID: \"d82b7063-6a29-4b42-807d-f49f8881be3e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.960920 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4pj\" (UniqueName: \"kubernetes.io/projected/0337a106-8b4b-41c0-824c-394d50c9ad6d-kube-api-access-8z4pj\") pod \"neutron-operator-controller-manager-585dbc889-pbv9r\" (UID: \"0337a106-8b4b-41c0-824c-394d50c9ad6d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.966743 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:51:55 crc kubenswrapper[4835]: I0129 15:51:55.970640 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r"] Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.022157 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.033644 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.065085 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7dj\" (UniqueName: \"kubernetes.io/projected/d82b7063-6a29-4b42-807d-f49f8881be3e-kube-api-access-2t7dj\") pod \"nova-operator-controller-manager-55bff696bd-g9pk9\" (UID: \"d82b7063-6a29-4b42-807d-f49f8881be3e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.065179 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z4pj\" (UniqueName: \"kubernetes.io/projected/0337a106-8b4b-41c0-824c-394d50c9ad6d-kube-api-access-8z4pj\") pod \"neutron-operator-controller-manager-585dbc889-pbv9r\" (UID: \"0337a106-8b4b-41c0-824c-394d50c9ad6d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.096939 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7dj\" (UniqueName: \"kubernetes.io/projected/d82b7063-6a29-4b42-807d-f49f8881be3e-kube-api-access-2t7dj\") pod \"nova-operator-controller-manager-55bff696bd-g9pk9\" (UID: \"d82b7063-6a29-4b42-807d-f49f8881be3e\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.101557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z4pj\" (UniqueName: \"kubernetes.io/projected/0337a106-8b4b-41c0-824c-394d50c9ad6d-kube-api-access-8z4pj\") pod \"neutron-operator-controller-manager-585dbc889-pbv9r\" (UID: \"0337a106-8b4b-41c0-824c-394d50c9ad6d\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.121439 4835 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.122521 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.128139 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-wvhgk"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.128338 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.155727 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.156602 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.168014 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.168215 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.168257 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:51:57.168243814 +0000 UTC m=+1419.361287468 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.170181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-kj2dg"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.171811 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.184249 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.200260 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.201318 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.207115 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-j9948"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.217779 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.233556 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.263248 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.270211 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wbz\" (UniqueName: \"kubernetes.io/projected/9674c18b-3676-4101-b206-5048d14862af-kube-api-access-k8wbz\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.270293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.270403 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccndh\" (UniqueName: \"kubernetes.io/projected/c13709c8-6829-4e4f-9acd-46aeaa147049-kube-api-access-ccndh\") pod \"octavia-operator-controller-manager-6687f8d877-dnwhm\" (UID: \"c13709c8-6829-4e4f-9acd-46aeaa147049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.271080 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.277143 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.287713 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7kpcd"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.306952 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.346729 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.348266 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.351346 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gxtk5"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372220 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccndh\" (UniqueName: \"kubernetes.io/projected/c13709c8-6829-4e4f-9acd-46aeaa147049-kube-api-access-ccndh\") pod \"octavia-operator-controller-manager-6687f8d877-dnwhm\" (UID: \"c13709c8-6829-4e4f-9acd-46aeaa147049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wbz\" (UniqueName: \"kubernetes.io/projected/9674c18b-3676-4101-b206-5048d14862af-kube-api-access-k8wbz\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372310 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-przkq\" (UniqueName: \"kubernetes.io/projected/6716e375-4195-40bf-a20b-2a716135e4ef-kube-api-access-przkq\") pod \"ovn-operator-controller-manager-788c46999f-xmd6m\" (UID: \"6716e375-4195-40bf-a20b-2a716135e4ef\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372334 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwzc\" (UniqueName: \"kubernetes.io/projected/a333de09-dfc0-446d-85d1-08f31bf6c540-kube-api-access-rzwzc\") pod \"placement-operator-controller-manager-5b964cf4cd-gvw9q\" (UID: \"a333de09-dfc0-446d-85d1-08f31bf6c540\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372381 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxj55\" (UniqueName: \"kubernetes.io/projected/d3d2d6b0-b259-4030-a685-d9e984324c58-kube-api-access-fxj55\") pod \"swift-operator-controller-manager-68fc8c869-tlvw5\" (UID: \"d3d2d6b0-b259-4030-a685-d9e984324c58\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.372833 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.372878 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert podName:9674c18b-3676-4101-b206-5048d14862af nodeName:}" failed. No retries permitted until 2026-01-29 15:51:56.872864823 +0000 UTC m=+1419.065908477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" (UID: "9674c18b-3676-4101-b206-5048d14862af") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.372992 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.376746 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.386560 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-94cnt"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.410936 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wbz\" (UniqueName: \"kubernetes.io/projected/9674c18b-3676-4101-b206-5048d14862af-kube-api-access-k8wbz\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.425038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccndh\" (UniqueName: \"kubernetes.io/projected/c13709c8-6829-4e4f-9acd-46aeaa147049-kube-api-access-ccndh\") pod \"octavia-operator-controller-manager-6687f8d877-dnwhm\" (UID: \"c13709c8-6829-4e4f-9acd-46aeaa147049\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.465905 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.489260 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-przkq\" (UniqueName: \"kubernetes.io/projected/6716e375-4195-40bf-a20b-2a716135e4ef-kube-api-access-przkq\") pod \"ovn-operator-controller-manager-788c46999f-xmd6m\" (UID: \"6716e375-4195-40bf-a20b-2a716135e4ef\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.489344 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzwzc\" (UniqueName: \"kubernetes.io/projected/a333de09-dfc0-446d-85d1-08f31bf6c540-kube-api-access-rzwzc\") pod \"placement-operator-controller-manager-5b964cf4cd-gvw9q\" (UID: \"a333de09-dfc0-446d-85d1-08f31bf6c540\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.489369 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxj55\" (UniqueName: \"kubernetes.io/projected/d3d2d6b0-b259-4030-a685-d9e984324c58-kube-api-access-fxj55\") pod \"swift-operator-controller-manager-68fc8c869-tlvw5\" (UID: \"d3d2d6b0-b259-4030-a685-d9e984324c58\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.489406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv228\" (UniqueName: \"kubernetes.io/projected/1b169eb7-1896-40cd-aefd-fc7b69b2bbdf-kube-api-access-rv228\") pod \"telemetry-operator-controller-manager-64b5b76f97-qsjxl\" (UID: \"1b169eb7-1896-40cd-aefd-fc7b69b2bbdf\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.492405 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.493277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.512221 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v4d46"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.519201 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzwzc\" (UniqueName: \"kubernetes.io/projected/a333de09-dfc0-446d-85d1-08f31bf6c540-kube-api-access-rzwzc\") pod \"placement-operator-controller-manager-5b964cf4cd-gvw9q\" (UID: \"a333de09-dfc0-446d-85d1-08f31bf6c540\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.523911 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.525896 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-przkq\" (UniqueName: \"kubernetes.io/projected/6716e375-4195-40bf-a20b-2a716135e4ef-kube-api-access-przkq\") pod \"ovn-operator-controller-manager-788c46999f-xmd6m\" (UID: \"6716e375-4195-40bf-a20b-2a716135e4ef\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.532387 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.535292 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.543954 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxj55\" (UniqueName: \"kubernetes.io/projected/d3d2d6b0-b259-4030-a685-d9e984324c58-kube-api-access-fxj55\") pod \"swift-operator-controller-manager-68fc8c869-tlvw5\" (UID: \"d3d2d6b0-b259-4030-a685-d9e984324c58\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.549426 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mkmjl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.555803 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.570346 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.590595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv228\" (UniqueName: \"kubernetes.io/projected/1b169eb7-1896-40cd-aefd-fc7b69b2bbdf-kube-api-access-rv228\") pod \"telemetry-operator-controller-manager-64b5b76f97-qsjxl\" (UID: \"1b169eb7-1896-40cd-aefd-fc7b69b2bbdf\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.624451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv228\" (UniqueName: \"kubernetes.io/projected/1b169eb7-1896-40cd-aefd-fc7b69b2bbdf-kube-api-access-rv228\") pod \"telemetry-operator-controller-manager-64b5b76f97-qsjxl\" (UID: \"1b169eb7-1896-40cd-aefd-fc7b69b2bbdf\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.626630 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.644751 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.645833 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.648808 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.649126 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.649310 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-69pbs"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.650945 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.656853 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.675197 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.676259 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.680924 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-42rf4"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.690011 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.691686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqk5v\" (UniqueName: \"kubernetes.io/projected/16a82779-3964-4e89-ab3f-162ab6ea8141-kube-api-access-wqk5v\") pod \"watcher-operator-controller-manager-564965969-rmmtl\" (UID: \"16a82779-3964-4e89-ab3f-162ab6ea8141\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.691725 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92trb\" (UniqueName: \"kubernetes.io/projected/2df55346-97f6-4abe-b4f5-4044995f7c2c-kube-api-access-92trb\") pod \"test-operator-controller-manager-56f8bfcd9f-bcmcn\" (UID: \"2df55346-97f6-4abe-b4f5-4044995f7c2c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.696840 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.762716 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.782629 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792335 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvz9k\" (UniqueName: \"kubernetes.io/projected/538c332f-1a4e-4bf8-b6be-12aae176c5ab-kube-api-access-xvz9k\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792382 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792431 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4d5\" (UniqueName: \"kubernetes.io/projected/a5745c9f-9837-415c-ac71-7138f6e3a3cc-kube-api-access-cp4d5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b57bv\" (UID: \"a5745c9f-9837-415c-ac71-7138f6e3a3cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqk5v\" (UniqueName: \"kubernetes.io/projected/16a82779-3964-4e89-ab3f-162ab6ea8141-kube-api-access-wqk5v\") pod \"watcher-operator-controller-manager-564965969-rmmtl\" (UID: \"16a82779-3964-4e89-ab3f-162ab6ea8141\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792500 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92trb\" (UniqueName: \"kubernetes.io/projected/2df55346-97f6-4abe-b4f5-4044995f7c2c-kube-api-access-92trb\") pod \"test-operator-controller-manager-56f8bfcd9f-bcmcn\" (UID: \"2df55346-97f6-4abe-b4f5-4044995f7c2c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.792518 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.814064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqk5v\" (UniqueName: \"kubernetes.io/projected/16a82779-3964-4e89-ab3f-162ab6ea8141-kube-api-access-wqk5v\") pod \"watcher-operator-controller-manager-564965969-rmmtl\" (UID: \"16a82779-3964-4e89-ab3f-162ab6ea8141\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.851756 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92trb\" (UniqueName: \"kubernetes.io/projected/2df55346-97f6-4abe-b4f5-4044995f7c2c-kube-api-access-92trb\") pod \"test-operator-controller-manager-56f8bfcd9f-bcmcn\" (UID: \"2df55346-97f6-4abe-b4f5-4044995f7c2c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.877781 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.895756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4d5\" (UniqueName: \"kubernetes.io/projected/a5745c9f-9837-415c-ac71-7138f6e3a3cc-kube-api-access-cp4d5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b57bv\" (UID: \"a5745c9f-9837-415c-ac71-7138f6e3a3cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.895818 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.895859 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.895934 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvz9k\" (UniqueName: \"kubernetes.io/projected/538c332f-1a4e-4bf8-b6be-12aae176c5ab-kube-api-access-xvz9k\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.895958 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896096 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896148 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:51:57.396131714 +0000 UTC m=+1419.589175368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "metrics-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896272 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896522 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert podName:9674c18b-3676-4101-b206-5048d14862af nodeName:}" failed. No retries permitted until 2026-01-29 15:51:57.896511683 +0000 UTC m=+1420.089555337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" (UID: "9674c18b-3676-4101-b206-5048d14862af") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896548 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: E0129 15:51:56.896680 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:51:57.396670907 +0000 UTC m=+1419.589714551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "webhook-server-cert" not found
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.921329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4d5\" (UniqueName: \"kubernetes.io/projected/a5745c9f-9837-415c-ac71-7138f6e3a3cc-kube-api-access-cp4d5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-b57bv\" (UID: \"a5745c9f-9837-415c-ac71-7138f6e3a3cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.930293 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvz9k\" (UniqueName: \"kubernetes.io/projected/538c332f-1a4e-4bf8-b6be-12aae176c5ab-kube-api-access-xvz9k\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.937791 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.985016 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz"]
Jan 29 15:51:56 crc kubenswrapper[4835]: I0129 15:51:56.987828 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"
Jan 29 15:51:57 crc kubenswrapper[4835]: W0129 15:51:57.007844 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5ab528_bcc6_47f2_a7f0_798dc4a09acc.slice/crio-3a4a910e7200dbcfb6f107166e5bcb75be06eae065f37371720f4292b020cb78 WatchSource:0}: Error finding container 3a4a910e7200dbcfb6f107166e5bcb75be06eae065f37371720f4292b020cb78: Status 404 returned error can't find the container with id 3a4a910e7200dbcfb6f107166e5bcb75be06eae065f37371720f4292b020cb78
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.029905 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.175141 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x"]
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.201365 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.201580 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.201629 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:51:59.20161333 +0000 UTC m=+1421.394656984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: W0129 15:51:57.307060 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3276caaa_e664_4dff_961d_0a4121d4c6a1.slice/crio-6e92e73bbeef830d97fa1fd26ce36fa063aa9eced36fdf35a2889762de99388d WatchSource:0}: Error finding container 6e92e73bbeef830d97fa1fd26ce36fa063aa9eced36fdf35a2889762de99388d: Status 404 returned error can't find the container with id 6e92e73bbeef830d97fa1fd26ce36fa063aa9eced36fdf35a2889762de99388d
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.405455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.405619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.405700 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.405765 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.405779 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:51:58.405753337 +0000 UTC m=+1420.598796991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "webhook-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.405813 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:51:58.405799058 +0000 UTC m=+1420.598842702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "metrics-server-cert" not found
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.522615 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" event={"ID":"3276caaa-e664-4dff-961d-0a4121d4c6a1","Type":"ContainerStarted","Data":"6e92e73bbeef830d97fa1fd26ce36fa063aa9eced36fdf35a2889762de99388d"}
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.543448 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" event={"ID":"5b5ab528-bcc6-47f2-a7f0-798dc4a09acc","Type":"ContainerStarted","Data":"3a4a910e7200dbcfb6f107166e5bcb75be06eae065f37371720f4292b020cb78"}
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.818237 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r"]
Jan 29 15:51:57 crc kubenswrapper[4835]: W0129 15:51:57.819640 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0337a106_8b4b_41c0_824c_394d50c9ad6d.slice/crio-a69b03238dc1451c384b6f608ceb310a28219b7a9f286e73e0acd1e86e733fc0 WatchSource:0}: Error finding container a69b03238dc1451c384b6f608ceb310a28219b7a9f286e73e0acd1e86e733fc0: Status 404 returned error can't find the container with id a69b03238dc1451c384b6f608ceb310a28219b7a9f286e73e0acd1e86e733fc0
Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.838702 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl"]
Jan 29 15:51:57 crc kubenswrapper[4835]: W0129 15:51:57.846789 4835
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21915d77_1cfb_4c42_827a_a918f63224b5.slice/crio-69e0146e932745a42cb5d0d7dc885d26428a2f961586d87a6a16033c275b2561 WatchSource:0}: Error finding container 69e0146e932745a42cb5d0d7dc885d26428a2f961586d87a6a16033c275b2561: Status 404 returned error can't find the container with id 69e0146e932745a42cb5d0d7dc885d26428a2f961586d87a6a16033c275b2561 Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.861954 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998"] Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.918919 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.919102 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:51:57 crc kubenswrapper[4835]: E0129 15:51:57.919153 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert podName:9674c18b-3676-4101-b206-5048d14862af nodeName:}" failed. No retries permitted until 2026-01-29 15:51:59.919138063 +0000 UTC m=+1422.112181717 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" (UID: "9674c18b-3676-4101-b206-5048d14862af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:51:57 crc kubenswrapper[4835]: I0129 15:51:57.948271 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9"] Jan 29 15:51:57 crc kubenswrapper[4835]: W0129 15:51:57.987745 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53c52d1f_17be_421f_8f5e_73cf6aefc28d.slice/crio-c8d2cff11afa36b01888a1d1b743f88c3bbcdd49d711991b83293cdcab2f6318 WatchSource:0}: Error finding container c8d2cff11afa36b01888a1d1b743f88c3bbcdd49d711991b83293cdcab2f6318: Status 404 returned error can't find the container with id c8d2cff11afa36b01888a1d1b743f88c3bbcdd49d711991b83293cdcab2f6318 Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.004719 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.015491 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.029451 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.053656 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.059541 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.071343 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.169060 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.176175 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf"] Jan 29 15:51:58 crc kubenswrapper[4835]: W0129 15:51:58.177949 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35afdcf5_a791_4fb2_aeea_0200ff95edf5.slice/crio-112e78c9142caa2cebbc51b99c14ed285b73c1bf66f5b80679d8e7822f504e8f WatchSource:0}: Error finding container 112e78c9142caa2cebbc51b99c14ed285b73c1bf66f5b80679d8e7822f504e8f: Status 404 returned error can't find the container with id 112e78c9142caa2cebbc51b99c14ed285b73c1bf66f5b80679d8e7822f504e8f Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.189620 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96p4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-5n2pr_openstack-operators(35afdcf5-a791-4fb2-aeea-0200ff95edf5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.190848 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podUID="35afdcf5-a791-4fb2-aeea-0200ff95edf5" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.197010 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl"] Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.202135 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q"] Jan 29 15:51:58 crc kubenswrapper[4835]: W0129 15:51:58.216355 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b169eb7_1896_40cd_aefd_fc7b69b2bbdf.slice/crio-2844d49551e22675df89ddc7a550f0b10d4e2cb8109377358362bd75b3776f07 WatchSource:0}: Error finding container 2844d49551e22675df89ddc7a550f0b10d4e2cb8109377358362bd75b3776f07: Status 404 returned error can't find the container with id 2844d49551e22675df89ddc7a550f0b10d4e2cb8109377358362bd75b3776f07 Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.220625 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m"] Jan 29 15:51:58 crc kubenswrapper[4835]: W0129 15:51:58.220765 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda333de09_dfc0_446d_85d1_08f31bf6c540.slice/crio-6d2a93f596803273f65fce878c129c2daecc14a5a45bc2eba375e990211d943c WatchSource:0}: Error finding container 6d2a93f596803273f65fce878c129c2daecc14a5a45bc2eba375e990211d943c: Status 404 returned error can't find the container with id 6d2a93f596803273f65fce878c129c2daecc14a5a45bc2eba375e990211d943c Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.228072 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv"] Jan 29 
15:51:58 crc kubenswrapper[4835]: W0129 15:51:58.229651 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5745c9f_9837_415c_ac71_7138f6e3a3cc.slice/crio-78680fc0b14c161a42373141fe39b07842df2dd5363f7fd8f8ac7d8732bc9ec8 WatchSource:0}: Error finding container 78680fc0b14c161a42373141fe39b07842df2dd5363f7fd8f8ac7d8732bc9ec8: Status 404 returned error can't find the container with id 78680fc0b14c161a42373141fe39b07842df2dd5363f7fd8f8ac7d8732bc9ec8 Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.229839 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv228,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-qsjxl_openstack-operators(1b169eb7-1896-40cd-aefd-fc7b69b2bbdf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.233863 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podUID="1b169eb7-1896-40cd-aefd-fc7b69b2bbdf" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.236863 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn"] Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.243911 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cp4d5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-b57bv_openstack-operators(a5745c9f-9837-415c-ac71-7138f6e3a3cc): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.243911 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-przkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-xmd6m_openstack-operators(6716e375-4195-40bf-a20b-2a716135e4ef): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.243976 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92trb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-bcmcn_openstack-operators(2df55346-97f6-4abe-b4f5-4044995f7c2c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.245703 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" podUID="2df55346-97f6-4abe-b4f5-4044995f7c2c" Jan 29 15:51:58 crc 
kubenswrapper[4835]: E0129 15:51:58.245754 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" podUID="6716e375-4195-40bf-a20b-2a716135e4ef" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.245774 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" podUID="a5745c9f-9837-415c-ac71-7138f6e3a3cc" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.269079 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-rmmtl"] Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.296120 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-rmmtl_openstack-operators(16a82779-3964-4e89-ab3f-162ab6ea8141): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.303785 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podUID="16a82779-3964-4e89-ab3f-162ab6ea8141" Jan 29 15:51:58 crc 
kubenswrapper[4835]: I0129 15:51:58.426642 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.426788 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.427129 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:00.427108865 +0000 UTC m=+1422.620152519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "webhook-server-cert" not found Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.427595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.427695 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.427733 4835 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:00.42772314 +0000 UTC m=+1422.620766794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "metrics-server-cert" not found Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.583254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" event={"ID":"35afdcf5-a791-4fb2-aeea-0200ff95edf5","Type":"ContainerStarted","Data":"112e78c9142caa2cebbc51b99c14ed285b73c1bf66f5b80679d8e7822f504e8f"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.585768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" event={"ID":"0e32b624-7997-4c99-a4d9-dff4da47132e","Type":"ContainerStarted","Data":"7a989ec614eb76b3e0b76f39c060a8041f0d9340568827f8c1d530635bf90e19"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.585903 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podUID="35afdcf5-a791-4fb2-aeea-0200ff95edf5" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.587806 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" 
event={"ID":"b19d416a-a9ff-435d-bf52-edd87d12da46","Type":"ContainerStarted","Data":"d9b1b6dba50287970a8a1128ef5a4f2620d0369765709c3121d87a79e328c980"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.589053 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" event={"ID":"53c52d1f-17be-421f-8f5e-73cf6aefc28d","Type":"ContainerStarted","Data":"c8d2cff11afa36b01888a1d1b743f88c3bbcdd49d711991b83293cdcab2f6318"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.603504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" event={"ID":"41e42cf7-1c34-41ad-ab8f-0fd362f01f58","Type":"ContainerStarted","Data":"bbb468d18f4de5f71d41174b3a3aab79d9f3dff7605a557f9191ffd1edcb1d42"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.614707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" event={"ID":"a5745c9f-9837-415c-ac71-7138f6e3a3cc","Type":"ContainerStarted","Data":"78680fc0b14c161a42373141fe39b07842df2dd5363f7fd8f8ac7d8732bc9ec8"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.618518 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" event={"ID":"0337a106-8b4b-41c0-824c-394d50c9ad6d","Type":"ContainerStarted","Data":"a69b03238dc1451c384b6f608ceb310a28219b7a9f286e73e0acd1e86e733fc0"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.620317 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" 
podUID="a5745c9f-9837-415c-ac71-7138f6e3a3cc" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.622794 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" event={"ID":"2df55346-97f6-4abe-b4f5-4044995f7c2c","Type":"ContainerStarted","Data":"59fe945892c8b60a1d2983840948b2edde90315abd74ca35abd6005b9ed19656"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.624403 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" podUID="2df55346-97f6-4abe-b4f5-4044995f7c2c" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.624961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" event={"ID":"1b169eb7-1896-40cd-aefd-fc7b69b2bbdf","Type":"ContainerStarted","Data":"2844d49551e22675df89ddc7a550f0b10d4e2cb8109377358362bd75b3776f07"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.633615 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podUID="1b169eb7-1896-40cd-aefd-fc7b69b2bbdf" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.641692 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" 
event={"ID":"514ee3fd-e758-471d-8378-b5b1c566ba36","Type":"ContainerStarted","Data":"fa0b6e95db3e04b06aeb23fada54c6885cc6133085c07341e49250816aab37c1"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.646557 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm" event={"ID":"c13709c8-6829-4e4f-9acd-46aeaa147049","Type":"ContainerStarted","Data":"697f5cc79ae2e9862f973d4ce6688a2a6bd6520e86151d1cdbae7a8105dfa0ba"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.650366 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" event={"ID":"d82b7063-6a29-4b42-807d-f49f8881be3e","Type":"ContainerStarted","Data":"816b6cfbd3939261bb648f16e875a0c8832af854adead43e1427435b96da37ce"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.659911 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" event={"ID":"7cefb6c7-0870-48df-8246-c69e8cbf82c6","Type":"ContainerStarted","Data":"ba569c8d36989edcffd028962637038bebe40b1d3cfb29d2018ef208b1b82bb2"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.672766 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" event={"ID":"a333de09-dfc0-446d-85d1-08f31bf6c540","Type":"ContainerStarted","Data":"6d2a93f596803273f65fce878c129c2daecc14a5a45bc2eba375e990211d943c"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.679175 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" event={"ID":"21915d77-1cfb-4c42-827a-a918f63224b5","Type":"ContainerStarted","Data":"69e0146e932745a42cb5d0d7dc885d26428a2f961586d87a6a16033c275b2561"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.687227 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5" event={"ID":"d3d2d6b0-b259-4030-a685-d9e984324c58","Type":"ContainerStarted","Data":"fb5c296418d4e93104a1e116d330bc36823e224123ff23cd7700e6e67f001d3a"} Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.696309 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" event={"ID":"6716e375-4195-40bf-a20b-2a716135e4ef","Type":"ContainerStarted","Data":"2153b933ddb96c5ac3a1ec96f02f7e5b3fa527339e9fadc26ac5694d200ce8cc"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.698941 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" podUID="6716e375-4195-40bf-a20b-2a716135e4ef" Jan 29 15:51:58 crc kubenswrapper[4835]: I0129 15:51:58.702942 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" event={"ID":"16a82779-3964-4e89-ab3f-162ab6ea8141","Type":"ContainerStarted","Data":"5cc3b062a70fa69b768291fc48c1caad682e234a1c9c98caeb5ba6d796704c02"} Jan 29 15:51:58 crc kubenswrapper[4835]: E0129 15:51:58.704661 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podUID="16a82779-3964-4e89-ab3f-162ab6ea8141" Jan 29 15:51:59 crc kubenswrapper[4835]: I0129 15:51:59.247480 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.247897 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.247951 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:52:03.247931746 +0000 UTC m=+1425.440975400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.742030 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podUID="35afdcf5-a791-4fb2-aeea-0200ff95edf5" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.742493 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" podUID="2df55346-97f6-4abe-b4f5-4044995f7c2c" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.742538 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" podUID="6716e375-4195-40bf-a20b-2a716135e4ef" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.742578 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podUID="1b169eb7-1896-40cd-aefd-fc7b69b2bbdf" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.742971 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" podUID="a5745c9f-9837-415c-ac71-7138f6e3a3cc" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.743027 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podUID="16a82779-3964-4e89-ab3f-162ab6ea8141" Jan 29 15:51:59 crc kubenswrapper[4835]: I0129 15:51:59.974640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.974821 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:51:59 crc kubenswrapper[4835]: E0129 15:51:59.974904 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert podName:9674c18b-3676-4101-b206-5048d14862af nodeName:}" failed. No retries permitted until 2026-01-29 15:52:03.974884752 +0000 UTC m=+1426.167928406 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" (UID: "9674c18b-3676-4101-b206-5048d14862af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:52:00 crc kubenswrapper[4835]: I0129 15:52:00.480840 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:00 crc kubenswrapper[4835]: I0129 15:52:00.481002 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:00 crc kubenswrapper[4835]: E0129 15:52:00.481078 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:52:00 crc kubenswrapper[4835]: E0129 15:52:00.481189 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:04.481162532 +0000 UTC m=+1426.674206186 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "metrics-server-cert" not found Jan 29 15:52:00 crc kubenswrapper[4835]: E0129 15:52:00.481198 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:52:00 crc kubenswrapper[4835]: E0129 15:52:00.481252 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:04.481237524 +0000 UTC m=+1426.674281178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "webhook-server-cert" not found Jan 29 15:52:03 crc kubenswrapper[4835]: I0129 15:52:03.324931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:03 crc kubenswrapper[4835]: E0129 15:52:03.325147 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 15:52:03 crc kubenswrapper[4835]: E0129 15:52:03.325741 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert 
podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:52:11.325645807 +0000 UTC m=+1433.518689461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: I0129 15:52:04.038891 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.039259 4835 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.039362 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert podName:9674c18b-3676-4101-b206-5048d14862af nodeName:}" failed. No retries permitted until 2026-01-29 15:52:12.039335975 +0000 UTC m=+1434.232379639 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" (UID: "9674c18b-3676-4101-b206-5048d14862af") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: I0129 15:52:04.544574 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.544722 4835 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: I0129 15:52:04.544697 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.544777 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:12.544763174 +0000 UTC m=+1434.737806828 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "metrics-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.544831 4835 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 15:52:04 crc kubenswrapper[4835]: E0129 15:52:04.544872 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs podName:538c332f-1a4e-4bf8-b6be-12aae176c5ab nodeName:}" failed. No retries permitted until 2026-01-29 15:52:12.544859806 +0000 UTC m=+1434.737903480 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-mrx86" (UID: "538c332f-1a4e-4bf8-b6be-12aae176c5ab") : secret "webhook-server-cert" not found Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.669413 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.670899 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.682571 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.776426 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.776812 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.776902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mglp\" (UniqueName: \"kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.878098 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.878148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.878233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mglp\" (UniqueName: \"kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.878832 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.878859 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.899044 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mglp\" (UniqueName: \"kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp\") pod \"redhat-operators-hjvxh\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:05 crc kubenswrapper[4835]: I0129 15:52:05.993779 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.159092 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.159800 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2t7dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-g9pk9_openstack-operators(d82b7063-6a29-4b42-807d-f49f8881be3e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.161083 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" podUID="d82b7063-6a29-4b42-807d-f49f8881be3e" Jan 29 15:52:11 crc kubenswrapper[4835]: I0129 15:52:11.360610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.361161 4835 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.361233 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert podName:adcf669a-6e0c-4cd6-a753-6ab8a5f15a25 nodeName:}" failed. No retries permitted until 2026-01-29 15:52:27.361215062 +0000 UTC m=+1449.554258716 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert") pod "infra-operator-controller-manager-79955696d6-t4z4q" (UID: "adcf669a-6e0c-4cd6-a753-6ab8a5f15a25") : secret "infra-operator-webhook-server-cert" not found Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.816146 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" podUID="d82b7063-6a29-4b42-807d-f49f8881be3e" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.825861 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.826102 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfb2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-jgc8x_openstack-operators(3276caaa-e664-4dff-961d-0a4121d4c6a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:11 crc kubenswrapper[4835]: E0129 15:52:11.827388 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" podUID="3276caaa-e664-4dff-961d-0a4121d4c6a1" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.071784 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.078911 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/9674c18b-3676-4101-b206-5048d14862af-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp\" (UID: \"9674c18b-3676-4101-b206-5048d14862af\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.085658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.579403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.579993 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.584807 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.594117 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538c332f-1a4e-4bf8-b6be-12aae176c5ab-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-mrx86\" (UID: \"538c332f-1a4e-4bf8-b6be-12aae176c5ab\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:12 crc kubenswrapper[4835]: E0129 15:52:12.722628 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c" Jan 29 15:52:12 crc kubenswrapper[4835]: E0129 15:52:12.722948 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zsrjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b6c4d8c5f-bm5sf_openstack-operators(b19d416a-a9ff-435d-bf52-edd87d12da46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:12 crc kubenswrapper[4835]: E0129 15:52:12.724176 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" podUID="b19d416a-a9ff-435d-bf52-edd87d12da46" Jan 29 15:52:12 crc kubenswrapper[4835]: I0129 15:52:12.819542 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:12 crc kubenswrapper[4835]: E0129 15:52:12.828310 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" podUID="3276caaa-e664-4dff-961d-0a4121d4c6a1" Jan 29 15:52:12 crc kubenswrapper[4835]: E0129 15:52:12.828410 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:379470e2752f286e73908e94233e884922b231169a5521a59f53843a2dc3184c\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" podUID="b19d416a-a9ff-435d-bf52-edd87d12da46" Jan 29 15:52:13 crc kubenswrapper[4835]: E0129 15:52:13.649901 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 15:52:13 crc kubenswrapper[4835]: E0129 15:52:13.650186 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rzwzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-gvw9q_openstack-operators(a333de09-dfc0-446d-85d1-08f31bf6c540): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:13 crc kubenswrapper[4835]: E0129 15:52:13.651769 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" podUID="a333de09-dfc0-446d-85d1-08f31bf6c540" Jan 29 15:52:13 crc kubenswrapper[4835]: E0129 15:52:13.841241 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" podUID="a333de09-dfc0-446d-85d1-08f31bf6c540" Jan 29 15:52:27 crc kubenswrapper[4835]: I0129 15:52:27.412763 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:27 crc kubenswrapper[4835]: I0129 15:52:27.419692 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/adcf669a-6e0c-4cd6-a753-6ab8a5f15a25-cert\") pod \"infra-operator-controller-manager-79955696d6-t4z4q\" (UID: \"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:27 crc kubenswrapper[4835]: E0129 15:52:27.659242 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 15:52:27 crc kubenswrapper[4835]: E0129 15:52:27.659459 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swf4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-pvll8_openstack-operators(514ee3fd-e758-471d-8378-b5b1c566ba36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:27 crc kubenswrapper[4835]: E0129 15:52:27.660634 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" podUID="514ee3fd-e758-471d-8378-b5b1c566ba36" Jan 29 15:52:27 crc kubenswrapper[4835]: I0129 15:52:27.673177 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vr26c" Jan 29 15:52:27 crc kubenswrapper[4835]: I0129 15:52:27.681881 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:27 crc kubenswrapper[4835]: E0129 15:52:27.944723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" podUID="514ee3fd-e758-471d-8378-b5b1c566ba36" Jan 29 15:52:28 crc kubenswrapper[4835]: E0129 15:52:28.674082 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 15:52:28 crc kubenswrapper[4835]: E0129 15:52:28.674270 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqk5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-rmmtl_openstack-operators(16a82779-3964-4e89-ab3f-162ab6ea8141): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:28 crc kubenswrapper[4835]: E0129 15:52:28.675583 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podUID="16a82779-3964-4e89-ab3f-162ab6ea8141" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.314637 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.315005 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv228,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-qsjxl_openstack-operators(1b169eb7-1896-40cd-aefd-fc7b69b2bbdf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.316250 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podUID="1b169eb7-1896-40cd-aefd-fc7b69b2bbdf" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.846365 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.846842 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96p4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-8d874c8fc-5n2pr_openstack-operators(35afdcf5-a791-4fb2-aeea-0200ff95edf5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:52:29 crc kubenswrapper[4835]: E0129 15:52:29.848077 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podUID="35afdcf5-a791-4fb2-aeea-0200ff95edf5" Jan 29 15:52:30 crc kubenswrapper[4835]: I0129 15:52:30.260039 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:52:32 crc kubenswrapper[4835]: W0129 15:52:32.289189 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c0649a1_77a6_4e5c_a6f1_d96480a91155.slice/crio-d25f730a17a3a5ac6f2cc9f845c97394ee79ea430e1f4daa440db54fa09251f5 WatchSource:0}: Error finding container d25f730a17a3a5ac6f2cc9f845c97394ee79ea430e1f4daa440db54fa09251f5: Status 
404 returned error can't find the container with id d25f730a17a3a5ac6f2cc9f845c97394ee79ea430e1f4daa440db54fa09251f5 Jan 29 15:52:32 crc kubenswrapper[4835]: I0129 15:52:32.977338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerStarted","Data":"d25f730a17a3a5ac6f2cc9f845c97394ee79ea430e1f4daa440db54fa09251f5"} Jan 29 15:52:33 crc kubenswrapper[4835]: I0129 15:52:33.990766 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" event={"ID":"0337a106-8b4b-41c0-824c-394d50c9ad6d","Type":"ContainerStarted","Data":"48ede20c31f860245164440bf41f4d3911bf398727efe174393025802b972634"} Jan 29 15:52:34 crc kubenswrapper[4835]: I0129 15:52:34.217444 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q"] Jan 29 15:52:34 crc kubenswrapper[4835]: I0129 15:52:34.228931 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86"] Jan 29 15:52:34 crc kubenswrapper[4835]: I0129 15:52:34.264345 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp"] Jan 29 15:52:34 crc kubenswrapper[4835]: W0129 15:52:34.284478 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9674c18b_3676_4101_b206_5048d14862af.slice/crio-1ee71eddbdf8778cb7ac319c250af72683400db2a835634f2d1b08a993ac3f45 WatchSource:0}: Error finding container 1ee71eddbdf8778cb7ac319c250af72683400db2a835634f2d1b08a993ac3f45: Status 404 returned error can't find the container with id 1ee71eddbdf8778cb7ac319c250af72683400db2a835634f2d1b08a993ac3f45 Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.019701 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" event={"ID":"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25","Type":"ContainerStarted","Data":"45c9671de659abffc7c1b4cbe977433e86509e9cabaccdbb0de930de8e94b716"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.038749 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" event={"ID":"9674c18b-3676-4101-b206-5048d14862af","Type":"ContainerStarted","Data":"1ee71eddbdf8778cb7ac319c250af72683400db2a835634f2d1b08a993ac3f45"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.053769 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" event={"ID":"5b5ab528-bcc6-47f2-a7f0-798dc4a09acc","Type":"ContainerStarted","Data":"6fef6be33528348c7b18f8f211a674fbae2c8fb20b1db63e86c45856b053d125"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.055079 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.073001 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" event={"ID":"53c52d1f-17be-421f-8f5e-73cf6aefc28d","Type":"ContainerStarted","Data":"a2113282d57f1144221b4287ee452e7e397c2797ce3c022d9952db940c92b984"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.073819 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.092771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm" 
event={"ID":"c13709c8-6829-4e4f-9acd-46aeaa147049","Type":"ContainerStarted","Data":"59b74a52480c430a001edf8a91ba0d59b0be5516934eac6da1341c732050900a"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.093588 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.109410 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" event={"ID":"a5745c9f-9837-415c-ac71-7138f6e3a3cc","Type":"ContainerStarted","Data":"14655621c2f9254e4b725a97e3da77b5cd24fdec012f7988da4198345a95086a"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.132635 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" event={"ID":"41e42cf7-1c34-41ad-ab8f-0fd362f01f58","Type":"ContainerStarted","Data":"c7db57b42105086b61510567488e0b91850ff081b76d40665bed02b2e2a6112e"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.133145 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.165889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" event={"ID":"21915d77-1cfb-4c42-827a-a918f63224b5","Type":"ContainerStarted","Data":"15b956c6751d4f89fcd9bcca618c171a34906dbf2e665b8d6bfe67e2760a87d1"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.166675 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.188826 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" event={"ID":"0e32b624-7997-4c99-a4d9-dff4da47132e","Type":"ContainerStarted","Data":"bca97ffe36ce776dae27580e32c374b8ca95dc1a384df24ec9d51ead2051c182"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.188849 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" podStartSLOduration=8.658824599 podStartE2EDuration="40.188826501s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.029666761 +0000 UTC m=+1419.222710415" lastFinishedPulling="2026-01-29 15:52:28.559668663 +0000 UTC m=+1450.752712317" observedRunningTime="2026-01-29 15:52:35.177143422 +0000 UTC m=+1457.370187076" watchObservedRunningTime="2026-01-29 15:52:35.188826501 +0000 UTC m=+1457.381870165" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.189623 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.222779 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" event={"ID":"d82b7063-6a29-4b42-807d-f49f8881be3e","Type":"ContainerStarted","Data":"b018a72b18edf8991f630df1a6b1eef7148f62f659a2ccb063ac88347d4b5b77"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.223648 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.242266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" event={"ID":"3276caaa-e664-4dff-961d-0a4121d4c6a1","Type":"ContainerStarted","Data":"a56d984c4348ca891b3ef8731bcff09249cf121045a67b545a58348e967c00af"} 
Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.243037 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.253927 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" podStartSLOduration=9.685448158 podStartE2EDuration="40.253900883s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.990289105 +0000 UTC m=+1420.183332759" lastFinishedPulling="2026-01-29 15:52:28.55874183 +0000 UTC m=+1450.751785484" observedRunningTime="2026-01-29 15:52:35.24328298 +0000 UTC m=+1457.436326634" watchObservedRunningTime="2026-01-29 15:52:35.253900883 +0000 UTC m=+1457.446944537" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.262657 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" event={"ID":"538c332f-1a4e-4bf8-b6be-12aae176c5ab","Type":"ContainerStarted","Data":"7f2230ecaaffe548b9a86b1a5f77e539801687c37151f5f6a3a29e5533a4aa38"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.262693 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" event={"ID":"538c332f-1a4e-4bf8-b6be-12aae176c5ab","Type":"ContainerStarted","Data":"f2d21aa5fe8f38d12f01e04c09ceee73612204d5296477457e299b6af392494c"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.263306 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.273432 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" 
event={"ID":"a333de09-dfc0-446d-85d1-08f31bf6c540","Type":"ContainerStarted","Data":"9d3968357f1f1776189a56ce26ee5dc5179ad65cd4286bbf22d681c655ae13ca"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.274242 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.276144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5" event={"ID":"d3d2d6b0-b259-4030-a685-d9e984324c58","Type":"ContainerStarted","Data":"aad11e8b02231add2ef75ea2f57f0715366fafc7e6e0e376bd499e48f3e3d1c9"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.276715 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.293193 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" podStartSLOduration=9.082616236 podStartE2EDuration="40.293177596s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.08212476 +0000 UTC m=+1420.275168414" lastFinishedPulling="2026-01-29 15:52:29.2926861 +0000 UTC m=+1451.485729774" observedRunningTime="2026-01-29 15:52:35.289787652 +0000 UTC m=+1457.482831306" watchObservedRunningTime="2026-01-29 15:52:35.293177596 +0000 UTC m=+1457.486221250" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.294964 4835 generic.go:334] "Generic (PLEG): container finished" podID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerID="96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b" exitCode=0 Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.295993 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hjvxh" event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerDied","Data":"96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.315760 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" event={"ID":"b19d416a-a9ff-435d-bf52-edd87d12da46","Type":"ContainerStarted","Data":"5827ad6b480bf99386b92db992790062bec0038aa431f0de94027c1023068abc"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.316362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.324403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" event={"ID":"6716e375-4195-40bf-a20b-2a716135e4ef","Type":"ContainerStarted","Data":"7f7253cd0be490c7123259839918016d0bf206f82983e35934060b4584f55561"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.325037 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.331077 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" podStartSLOduration=9.624289143 podStartE2EDuration="40.331056994s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.852749639 +0000 UTC m=+1420.045793293" lastFinishedPulling="2026-01-29 15:52:28.55951749 +0000 UTC m=+1450.752561144" observedRunningTime="2026-01-29 15:52:35.325477746 +0000 UTC m=+1457.518521400" watchObservedRunningTime="2026-01-29 15:52:35.331056994 +0000 UTC m=+1457.524100638" Jan 29 15:52:35 crc 
kubenswrapper[4835]: I0129 15:52:35.350758 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" event={"ID":"2df55346-97f6-4abe-b4f5-4044995f7c2c","Type":"ContainerStarted","Data":"6beeaab7b07cd2123261bad7171def2bf3f3c2b16ffe4a97897694d03d9953bc"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.351258 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.365738 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-b57bv" podStartSLOduration=3.428100878 podStartE2EDuration="39.365722163s" podCreationTimestamp="2026-01-29 15:51:56 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.243798085 +0000 UTC m=+1420.436841729" lastFinishedPulling="2026-01-29 15:52:34.18141936 +0000 UTC m=+1456.374463014" observedRunningTime="2026-01-29 15:52:35.363898218 +0000 UTC m=+1457.556941872" watchObservedRunningTime="2026-01-29 15:52:35.365722163 +0000 UTC m=+1457.558765817" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.387738 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" event={"ID":"7cefb6c7-0870-48df-8246-c69e8cbf82c6","Type":"ContainerStarted","Data":"f0d6f22db03cb984ae0e146b83980ff2996ad3d83ba0bd51dc162a5b25fa18e1"} Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.387779 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.387792 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" Jan 29 15:52:35 crc 
kubenswrapper[4835]: I0129 15:52:35.395880 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm" podStartSLOduration=9.143020352 podStartE2EDuration="40.395859179s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.040023307 +0000 UTC m=+1420.233066961" lastFinishedPulling="2026-01-29 15:52:29.292862134 +0000 UTC m=+1451.485905788" observedRunningTime="2026-01-29 15:52:35.394745272 +0000 UTC m=+1457.587788926" watchObservedRunningTime="2026-01-29 15:52:35.395859179 +0000 UTC m=+1457.588902833" Jan 29 15:52:35 crc kubenswrapper[4835]: E0129 15:52:35.462594 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:52:35 crc kubenswrapper[4835]: E0129 15:52:35.462729 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mglp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hjvxh_openshift-marketplace(7c0649a1-77a6-4e5c-a6f1-d96480a91155): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:35 crc kubenswrapper[4835]: E0129 15:52:35.466385 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.557857 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" podStartSLOduration=4.999789258 podStartE2EDuration="40.557839972s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.17781973 +0000 UTC m=+1420.370863384" lastFinishedPulling="2026-01-29 15:52:33.735870444 +0000 UTC m=+1455.928914098" observedRunningTime="2026-01-29 15:52:35.490805041 +0000 UTC m=+1457.683848695" watchObservedRunningTime="2026-01-29 15:52:35.557839972 +0000 UTC m=+1457.750883626" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.564299 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" podStartSLOduration=39.564276521 podStartE2EDuration="39.564276521s" podCreationTimestamp="2026-01-29 15:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:52:35.556046237 +0000 UTC m=+1457.749089901" watchObservedRunningTime="2026-01-29 15:52:35.564276521 +0000 UTC m=+1457.757320175" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.589617 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" podStartSLOduration=9.11828209 podStartE2EDuration="40.589597908s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.820730906 +0000 UTC m=+1420.013774560" lastFinishedPulling="2026-01-29 15:52:29.292046714 +0000 UTC m=+1451.485090378" observedRunningTime="2026-01-29 15:52:35.587020164 +0000 UTC m=+1457.780063818" watchObservedRunningTime="2026-01-29 15:52:35.589597908 
+0000 UTC m=+1457.782641562" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.614657 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5" podStartSLOduration=8.832560072 podStartE2EDuration="40.614634168s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.074811489 +0000 UTC m=+1420.267855143" lastFinishedPulling="2026-01-29 15:52:29.856885545 +0000 UTC m=+1452.049929239" observedRunningTime="2026-01-29 15:52:35.61227795 +0000 UTC m=+1457.805321604" watchObservedRunningTime="2026-01-29 15:52:35.614634168 +0000 UTC m=+1457.807677822" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.685450 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" podStartSLOduration=9.407682797 podStartE2EDuration="40.685430562s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.015445198 +0000 UTC m=+1420.208488852" lastFinishedPulling="2026-01-29 15:52:29.293192963 +0000 UTC m=+1451.486236617" observedRunningTime="2026-01-29 15:52:35.65103104 +0000 UTC m=+1457.844074714" watchObservedRunningTime="2026-01-29 15:52:35.685430562 +0000 UTC m=+1457.878474226" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.689966 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" podStartSLOduration=4.28603397 podStartE2EDuration="40.689947324s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.317174253 +0000 UTC m=+1419.510217907" lastFinishedPulling="2026-01-29 15:52:33.721087597 +0000 UTC m=+1455.914131261" observedRunningTime="2026-01-29 15:52:35.68455749 +0000 UTC m=+1457.877601164" watchObservedRunningTime="2026-01-29 15:52:35.689947324 +0000 UTC 
m=+1457.882990978" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.733626 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" podStartSLOduration=5.240310986 podStartE2EDuration="40.733603625s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.243752714 +0000 UTC m=+1420.436796368" lastFinishedPulling="2026-01-29 15:52:33.737045313 +0000 UTC m=+1455.930089007" observedRunningTime="2026-01-29 15:52:35.719817704 +0000 UTC m=+1457.912861358" watchObservedRunningTime="2026-01-29 15:52:35.733603625 +0000 UTC m=+1457.926647269" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.774703 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" podStartSLOduration=4.988320795 podStartE2EDuration="40.774684323s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.933912709 +0000 UTC m=+1420.126956363" lastFinishedPulling="2026-01-29 15:52:33.720276227 +0000 UTC m=+1455.913319891" observedRunningTime="2026-01-29 15:52:35.766865249 +0000 UTC m=+1457.959908903" watchObservedRunningTime="2026-01-29 15:52:35.774684323 +0000 UTC m=+1457.967727987" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.795003 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" podStartSLOduration=10.095829163 podStartE2EDuration="40.794986926s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:57.859402123 +0000 UTC m=+1420.052445777" lastFinishedPulling="2026-01-29 15:52:28.558559886 +0000 UTC m=+1450.751603540" observedRunningTime="2026-01-29 15:52:35.79356412 +0000 UTC m=+1457.986607774" watchObservedRunningTime="2026-01-29 15:52:35.794986926 +0000 UTC m=+1457.988030580" Jan 
29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.824178 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" podStartSLOduration=5.34748853 podStartE2EDuration="40.824157708s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.243739683 +0000 UTC m=+1420.436783337" lastFinishedPulling="2026-01-29 15:52:33.720408821 +0000 UTC m=+1455.913452515" observedRunningTime="2026-01-29 15:52:35.815417462 +0000 UTC m=+1458.008461116" watchObservedRunningTime="2026-01-29 15:52:35.824157708 +0000 UTC m=+1458.017201362" Jan 29 15:52:35 crc kubenswrapper[4835]: I0129 15:52:35.863726 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" podStartSLOduration=5.370260695 podStartE2EDuration="40.863707268s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.225561413 +0000 UTC m=+1420.418605067" lastFinishedPulling="2026-01-29 15:52:33.719007966 +0000 UTC m=+1455.912051640" observedRunningTime="2026-01-29 15:52:35.838999596 +0000 UTC m=+1458.032043260" watchObservedRunningTime="2026-01-29 15:52:35.863707268 +0000 UTC m=+1458.056750922" Jan 29 15:52:36 crc kubenswrapper[4835]: E0129 15:52:36.400072 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.410734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" 
event={"ID":"9674c18b-3676-4101-b206-5048d14862af","Type":"ContainerStarted","Data":"3d8aa9a8c53e40f2bb8cacd56f1883085573e77e20d3c84f3312efed795c12a0"} Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.411106 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.414191 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" event={"ID":"adcf669a-6e0c-4cd6-a753-6ab8a5f15a25","Type":"ContainerStarted","Data":"4299b300d1f07aa4ecd7e6a9ef6c5c7d9b0515467b0ada40778d922d3970d1e0"} Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.414717 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.434391 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" podStartSLOduration=39.604092948 podStartE2EDuration="43.434376221s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:52:34.286594835 +0000 UTC m=+1456.479638489" lastFinishedPulling="2026-01-29 15:52:38.116878108 +0000 UTC m=+1460.309921762" observedRunningTime="2026-01-29 15:52:38.43314351 +0000 UTC m=+1460.626187164" watchObservedRunningTime="2026-01-29 15:52:38.434376221 +0000 UTC m=+1460.627419875" Jan 29 15:52:38 crc kubenswrapper[4835]: I0129 15:52:38.450486 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" podStartSLOduration=39.580538355 podStartE2EDuration="43.450444239s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:52:34.246679196 +0000 UTC 
m=+1456.439722850" lastFinishedPulling="2026-01-29 15:52:38.11658508 +0000 UTC m=+1460.309628734" observedRunningTime="2026-01-29 15:52:38.448024329 +0000 UTC m=+1460.641067993" watchObservedRunningTime="2026-01-29 15:52:38.450444239 +0000 UTC m=+1460.643487893" Jan 29 15:52:40 crc kubenswrapper[4835]: E0129 15:52:40.493689 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podUID="16a82779-3964-4e89-ab3f-162ab6ea8141" Jan 29 15:52:42 crc kubenswrapper[4835]: I0129 15:52:42.443237 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" event={"ID":"514ee3fd-e758-471d-8378-b5b1c566ba36","Type":"ContainerStarted","Data":"b057d4a5b5b4ff302022ac3ac4b5d5c8748ed973c5876f6ab9662f43f208327f"} Jan 29 15:52:42 crc kubenswrapper[4835]: I0129 15:52:42.443783 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:52:42 crc kubenswrapper[4835]: I0129 15:52:42.460753 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" podStartSLOduration=3.548842911 podStartE2EDuration="47.460733381s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.081840213 +0000 UTC m=+1420.274883867" lastFinishedPulling="2026-01-29 15:52:41.993730683 +0000 UTC m=+1464.186774337" observedRunningTime="2026-01-29 15:52:42.456825944 +0000 UTC m=+1464.649869618" watchObservedRunningTime="2026-01-29 15:52:42.460733381 +0000 UTC m=+1464.653777035" Jan 29 15:52:42 crc 
kubenswrapper[4835]: E0129 15:52:42.501910 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podUID="1b169eb7-1896-40cd-aefd-fc7b69b2bbdf" Jan 29 15:52:42 crc kubenswrapper[4835]: E0129 15:52:42.502008 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:6e21a1dda86ba365817102d23a5d4d2d5dcd1c4d8e5f8d74bd24548aa8c63898\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podUID="35afdcf5-a791-4fb2-aeea-0200ff95edf5" Jan 29 15:52:42 crc kubenswrapper[4835]: I0129 15:52:42.825589 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-mrx86" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.776202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-x6zhz" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.796894 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-jgc8x" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.817401 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-8c998" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.889591 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-fq8mf" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.958879 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-6n6sr" Jan 29 15:52:45 crc kubenswrapper[4835]: I0129 15:52:45.959070 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-bm5sf" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.029053 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-n6nf2" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.043034 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-ftjdl" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.176020 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-g9pk9" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.266459 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-pbv9r" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.629342 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-dnwhm" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.656035 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-xmd6m" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.703280 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-gvw9q" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.794791 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-tlvw5" Jan 29 15:52:46 crc kubenswrapper[4835]: I0129 15:52:46.881573 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-bcmcn" Jan 29 15:52:47 crc kubenswrapper[4835]: I0129 15:52:47.687587 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-t4z4q" Jan 29 15:52:50 crc kubenswrapper[4835]: E0129 15:52:50.624498 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:52:50 crc kubenswrapper[4835]: E0129 15:52:50.624977 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mglp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hjvxh_openshift-marketplace(7c0649a1-77a6-4e5c-a6f1-d96480a91155): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:52:50 crc kubenswrapper[4835]: E0129 15:52:50.626896 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:52:52 crc kubenswrapper[4835]: I0129 15:52:52.093651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp" Jan 29 15:52:52 crc kubenswrapper[4835]: I0129 15:52:52.519779 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" event={"ID":"16a82779-3964-4e89-ab3f-162ab6ea8141","Type":"ContainerStarted","Data":"b5eaf483e2b898c62f818aa0156db3dcc2b6664a00ee4f4a18aa6fc52fc181e3"} Jan 29 15:52:52 crc kubenswrapper[4835]: I0129 15:52:52.520321 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" Jan 29 15:52:52 crc kubenswrapper[4835]: I0129 15:52:52.536842 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" podStartSLOduration=2.651040952 podStartE2EDuration="56.536823379s" podCreationTimestamp="2026-01-29 15:51:56 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.295963517 +0000 UTC m=+1420.489007171" lastFinishedPulling="2026-01-29 15:52:52.181745944 +0000 UTC m=+1474.374789598" observedRunningTime="2026-01-29 15:52:52.53645384 +0000 UTC m=+1474.729497494" watchObservedRunningTime="2026-01-29 15:52:52.536823379 +0000 UTC m=+1474.729867033" Jan 29 15:52:55 crc kubenswrapper[4835]: I0129 15:52:55.925459 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-pvll8" Jan 29 15:52:57 crc kubenswrapper[4835]: I0129 15:52:57.569648 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" 
event={"ID":"1b169eb7-1896-40cd-aefd-fc7b69b2bbdf","Type":"ContainerStarted","Data":"f5279f84ea09faed4cd98114d41dba3c53eadd3f29b588d1c53412f619604395"} Jan 29 15:52:57 crc kubenswrapper[4835]: I0129 15:52:57.570217 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" Jan 29 15:52:57 crc kubenswrapper[4835]: I0129 15:52:57.590171 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" podStartSLOduration=4.345555685 podStartE2EDuration="1m2.590144885s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.229705016 +0000 UTC m=+1420.422748670" lastFinishedPulling="2026-01-29 15:52:56.474294216 +0000 UTC m=+1478.667337870" observedRunningTime="2026-01-29 15:52:57.589681353 +0000 UTC m=+1479.782725017" watchObservedRunningTime="2026-01-29 15:52:57.590144885 +0000 UTC m=+1479.783188539" Jan 29 15:52:58 crc kubenswrapper[4835]: I0129 15:52:58.577326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" event={"ID":"35afdcf5-a791-4fb2-aeea-0200ff95edf5","Type":"ContainerStarted","Data":"f542f50156159c99dd7453dee784bac84c354ae924be4f41f00e7b5b2af86747"} Jan 29 15:52:58 crc kubenswrapper[4835]: I0129 15:52:58.578879 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:52:58 crc kubenswrapper[4835]: I0129 15:52:58.595612 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" podStartSLOduration=3.662638099 podStartE2EDuration="1m3.595584409s" podCreationTimestamp="2026-01-29 15:51:55 +0000 UTC" firstStartedPulling="2026-01-29 15:51:58.189407657 +0000 UTC m=+1420.382451311" 
lastFinishedPulling="2026-01-29 15:52:58.122353967 +0000 UTC m=+1480.315397621" observedRunningTime="2026-01-29 15:52:58.593557599 +0000 UTC m=+1480.786601253" watchObservedRunningTime="2026-01-29 15:52:58.595584409 +0000 UTC m=+1480.788628063" Jan 29 15:53:04 crc kubenswrapper[4835]: E0129 15:53:04.495604 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:53:05 crc kubenswrapper[4835]: I0129 15:53:05.969949 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-5n2pr" Jan 29 15:53:06 crc kubenswrapper[4835]: I0129 15:53:06.785789 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-qsjxl" Jan 29 15:53:06 crc kubenswrapper[4835]: I0129 15:53:06.940356 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-rmmtl" Jan 29 15:53:20 crc kubenswrapper[4835]: E0129 15:53:20.296895 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:53:20 crc kubenswrapper[4835]: E0129 15:53:20.297478 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mglp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hjvxh_openshift-marketplace(7c0649a1-77a6-4e5c-a6f1-d96480a91155): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:53:20 crc kubenswrapper[4835]: E0129 15:53:20.298828 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.571386 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.575326 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.580538 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.581217 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kdxkf" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.581354 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.581370 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.610840 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.654529 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.657758 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.661278 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.699039 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlptf\" (UniqueName: \"kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.699163 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.705505 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.800825 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qp5s\" (UniqueName: \"kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.800925 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" 
Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.800972 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.801002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.801026 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlptf\" (UniqueName: \"kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.803108 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.819418 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlptf\" (UniqueName: \"kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf\") pod \"dnsmasq-dns-84bb9d8bd9-qnv86\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 
15:53:24.902276 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.902335 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.902391 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qp5s\" (UniqueName: \"kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.903077 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.903490 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.906617 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.922369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qp5s\" (UniqueName: \"kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s\") pod \"dnsmasq-dns-5f854695bc-dtq5m\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:24 crc kubenswrapper[4835]: I0129 15:53:24.990215 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:53:25 crc kubenswrapper[4835]: I0129 15:53:25.329291 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:53:25 crc kubenswrapper[4835]: I0129 15:53:25.427387 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:53:25 crc kubenswrapper[4835]: W0129 15:53:25.429051 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38c68723_9295_4800_89ae_45b4a8b0bf06.slice/crio-f9521ccbb610d5098d42ccbbd5679c7cf6d36e35f580e223de14c86c606daf24 WatchSource:0}: Error finding container f9521ccbb610d5098d42ccbbd5679c7cf6d36e35f580e223de14c86c606daf24: Status 404 returned error can't find the container with id f9521ccbb610d5098d42ccbbd5679c7cf6d36e35f580e223de14c86c606daf24 Jan 29 15:53:25 crc kubenswrapper[4835]: I0129 15:53:25.784452 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" event={"ID":"ed6eb380-9d48-4a59-b66c-5416b8e9eb52","Type":"ContainerStarted","Data":"f9b4007d8d40f6fb7ec4b2468cdd1c56985e823abf48a2f0374709b5845abc8a"} Jan 29 15:53:25 crc kubenswrapper[4835]: I0129 15:53:25.785302 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" event={"ID":"38c68723-9295-4800-89ae-45b4a8b0bf06","Type":"ContainerStarted","Data":"f9521ccbb610d5098d42ccbbd5679c7cf6d36e35f580e223de14c86c606daf24"} Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.083671 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.117304 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.121340 4835 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.135708 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.322883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8pcf\" (UniqueName: \"kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.323010 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.323063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.423963 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.424036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.424089 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8pcf\" (UniqueName: \"kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.424939 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.424951 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.446539 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8pcf\" (UniqueName: \"kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf\") pod \"dnsmasq-dns-744ffd65bc-s6gt5\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.744802 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.774875 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.810577 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.811855 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.820052 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.939721 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.940094 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2m2\" (UniqueName: \"kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:26 crc kubenswrapper[4835]: I0129 15:53:26.940145 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 
15:53:27.041533 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.041646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2m2\" (UniqueName: \"kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.041706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.043123 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.044162 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.075870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2m2\" 
(UniqueName: \"kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2\") pod \"dnsmasq-dns-95f5f6995-r4nwn\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.192821 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.272171 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.273451 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.276641 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.277051 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.277198 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.277564 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4jwjs" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.278884 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.278970 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.279231 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.281876 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.300358 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.454666 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455128 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455167 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455195 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455219 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455267 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455329 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455400 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455483 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455507 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nncc5\" (UniqueName: 
\"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.455536 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557350 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557409 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nncc5\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557504 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data\") pod 
\"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557532 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557553 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557574 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557635 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557670 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.557695 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.561013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.561879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.562891 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.566311 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.566693 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.569306 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.570364 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.579037 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.593144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.595216 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.606753 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.624375 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nncc5\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5\") pod \"rabbitmq-server-0\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.656255 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.708129 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.886958 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" event={"ID":"a537592e-b4b7-4871-8990-57229798f6b1","Type":"ContainerStarted","Data":"76216b65a757452c30712611ca0837fc2e463fcc5782ee1dcb018171c0cead56"} Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.892555 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerStarted","Data":"8dc0b76e5213425ea9f676225eb18f568987f0a15eacdbbfa0682d1fdfa65701"} Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.936587 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.937833 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946268 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946344 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jkk47" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946571 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946625 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946751 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946770 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.946800 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 15:53:27 crc kubenswrapper[4835]: I0129 15:53:27.961728 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066257 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnh4\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066301 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066396 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066603 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066765 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066865 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066908 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.066950 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.067041 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.144379 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Jan 29 15:53:28 crc kubenswrapper[4835]: W0129 15:53:28.162738 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod427d05dd_22a3_4557_b9b3_9b5a39ea4662.slice/crio-3d3f81f15872f435d1a42b9d08bae938c58195ed97563b06456f5d0455608a91 WatchSource:0}: Error finding container 3d3f81f15872f435d1a42b9d08bae938c58195ed97563b06456f5d0455608a91: Status 404 returned error can't find the container with id 3d3f81f15872f435d1a42b9d08bae938c58195ed97563b06456f5d0455608a91 Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168030 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnh4\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168078 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168112 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168131 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168177 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168212 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168246 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168273 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168293 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168311 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.168343 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.169729 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.169839 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.170661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.170989 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.171758 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.174301 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.174434 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.175439 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.182611 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.183248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.209868 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnh4\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.220639 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.272830 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.820055 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.912755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerStarted","Data":"12a63b3856c819c9803c120fbbc205854c80b668711cefa49c884bde02925bce"} Jan 29 15:53:28 crc kubenswrapper[4835]: I0129 15:53:28.914719 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerStarted","Data":"3d3f81f15872f435d1a42b9d08bae938c58195ed97563b06456f5d0455608a91"} Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.525126 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.526619 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.530281 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.530389 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mjkcp" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.530805 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.531695 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.538437 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.540347 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703567 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703658 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-w5jx4\" (UniqueName: \"kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703792 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703826 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.703871 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809037 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809117 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809148 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809178 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jx4\" (UniqueName: \"kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809211 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809312 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809926 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.809981 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.810926 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.812251 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 
15:53:29.812728 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.813557 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.813686 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.824261 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.824287 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.837724 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jx4\" (UniqueName: 
\"kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:29 crc kubenswrapper[4835]: I0129 15:53:29.907684 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " pod="openstack/openstack-galera-0" Jan 29 15:53:30 crc kubenswrapper[4835]: I0129 15:53:30.147382 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 15:53:30 crc kubenswrapper[4835]: I0129 15:53:30.712185 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 15:53:30 crc kubenswrapper[4835]: I0129 15:53:30.968742 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerStarted","Data":"07041679b04dff6150ab83b7c3e729ff700117f73d891407d533a99e257d29e0"} Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.007583 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.009097 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.011387 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-r2xv4" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.012730 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.012731 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.012738 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.018573 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133122 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133199 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqh2c\" (UniqueName: \"kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133230 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133280 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133323 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133353 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133376 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.133427 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.234997 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqh2c\" (UniqueName: \"kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235185 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235222 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235278 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235348 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.235426 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.236441 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.238447 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") device 
mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.239872 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.242791 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.243413 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.251603 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.262129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: 
I0129 15:53:31.266560 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.267982 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.271786 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqh2c\" (UniqueName: \"kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.272366 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.272566 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-t25bq" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.274914 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.297855 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.315327 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.336538 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.336584 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.336600 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2gc\" (UniqueName: \"kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.336631 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.336672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.340860 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.437767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.437841 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.437860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r2gc\" (UniqueName: \"kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.438157 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.438212 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.439061 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.439115 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.442928 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.446768 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.470020 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r2gc\" (UniqueName: \"kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc\") pod \"memcached-0\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") " pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.668405 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.891175 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 15:53:31 crc kubenswrapper[4835]: W0129 15:53:31.926836 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0cc44f_ffb4_4d53_8219_a3ce2bd31e05.slice/crio-7cba8d0e133aadb739ea643f6ade47d113a345698f925cd5992d9f9d01a9d5e0 WatchSource:0}: Error finding container 7cba8d0e133aadb739ea643f6ade47d113a345698f925cd5992d9f9d01a9d5e0: Status 404 returned error can't find the container with id 7cba8d0e133aadb739ea643f6ade47d113a345698f925cd5992d9f9d01a9d5e0 Jan 29 15:53:31 crc kubenswrapper[4835]: I0129 15:53:31.986229 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerStarted","Data":"7cba8d0e133aadb739ea643f6ade47d113a345698f925cd5992d9f9d01a9d5e0"} Jan 29 15:53:32 crc kubenswrapper[4835]: I0129 15:53:32.305852 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 15:53:32 crc kubenswrapper[4835]: W0129 15:53:32.344873 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d24538c_d668_497c_b4cd_4bff633b43fb.slice/crio-3667df7ace3777ee72c552a8804ff519d80c14420f823a2d8d224d5f7aba52fd WatchSource:0}: Error finding container 3667df7ace3777ee72c552a8804ff519d80c14420f823a2d8d224d5f7aba52fd: Status 404 returned error can't find the container with id 3667df7ace3777ee72c552a8804ff519d80c14420f823a2d8d224d5f7aba52fd Jan 29 15:53:32 crc kubenswrapper[4835]: I0129 15:53:32.978322 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:53:32 crc kubenswrapper[4835]: I0129 15:53:32.980980 4835 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:53:32 crc kubenswrapper[4835]: I0129 15:53:32.983555 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-smsbq" Jan 29 15:53:32 crc kubenswrapper[4835]: I0129 15:53:32.991820 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.001947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8d24538c-d668-497c-b4cd-4bff633b43fb","Type":"ContainerStarted","Data":"3667df7ace3777ee72c552a8804ff519d80c14420f823a2d8d224d5f7aba52fd"} Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.074436 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxg8\" (UniqueName: \"kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8\") pod \"kube-state-metrics-0\" (UID: \"28230ab4-6e4e-497e-9c31-22ff91eba741\") " pod="openstack/kube-state-metrics-0" Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.175662 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxg8\" (UniqueName: \"kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8\") pod \"kube-state-metrics-0\" (UID: \"28230ab4-6e4e-497e-9c31-22ff91eba741\") " pod="openstack/kube-state-metrics-0" Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.195822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxg8\" (UniqueName: \"kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8\") pod \"kube-state-metrics-0\" (UID: \"28230ab4-6e4e-497e-9c31-22ff91eba741\") " pod="openstack/kube-state-metrics-0" Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.310907 4835 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 15:53:33 crc kubenswrapper[4835]: E0129 15:53:33.498514 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:53:33 crc kubenswrapper[4835]: I0129 15:53:33.887119 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.684700 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.686225 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.687861 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sgn9w" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.688532 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.688660 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.694061 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.695850 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.733538 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748388 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmbc\" (UniqueName: \"kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748457 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748614 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c29wv\" 
(UniqueName: \"kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748697 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748811 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748864 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748907 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.748980 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.749011 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.749416 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850459 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850538 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850564 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850643 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850663 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850689 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn\") pod \"ovn-controller-wvpc4\" (UID: 
\"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850757 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850781 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwmbc\" (UniqueName: \"kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850800 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c29wv\" (UniqueName: \"kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.850847 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc 
kubenswrapper[4835]: I0129 15:53:36.850875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.851435 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.851650 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.853355 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.855099 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.855152 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.855216 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.855280 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.855381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.861226 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.866432 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 
29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.871022 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.873997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c29wv\" (UniqueName: \"kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv\") pod \"ovn-controller-wvpc4\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:36 crc kubenswrapper[4835]: I0129 15:53:36.874257 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwmbc\" (UniqueName: \"kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc\") pod \"ovn-controller-ovs-v49xf\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.031722 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wvpc4" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.050540 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.575823 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.576978 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.584083 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cr5w4" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.584375 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.587202 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.587575 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.587742 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.593628 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662112 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662164 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm87h\" (UniqueName: \"kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662185 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662210 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662225 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662249 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.662343 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763506 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763588 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763648 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763683 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm87h\" (UniqueName: \"kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763735 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763775 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.763829 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.764265 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.764339 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.764813 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.767404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.768366 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.770638 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.772110 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.779146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm87h\" (UniqueName: \"kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") 
" pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.783404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:37 crc kubenswrapper[4835]: I0129 15:53:37.908220 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 15:53:38 crc kubenswrapper[4835]: I0129 15:53:38.043018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"28230ab4-6e4e-497e-9c31-22ff91eba741","Type":"ContainerStarted","Data":"b530027dc5e939b6100e0ce06ad986e1b85c9fd3c7c757ef6f6df904cff24e57"} Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.000019 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.001793 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.005736 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.006192 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.010717 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7fmh5" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.011028 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.028637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126489 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126530 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126579 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnh6q\" (UniqueName: \"kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q\") pod \"ovsdbserver-sb-0\" (UID: 
\"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126603 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126625 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126641 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126659 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.126679 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 
15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228502 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnh6q\" (UniqueName: \"kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228555 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228584 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228619 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228651 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228756 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.228780 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.229707 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.229783 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.230091 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 
15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.230537 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.242899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.242914 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.243011 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.246484 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnh6q\" (UniqueName: \"kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.256817 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:41 crc kubenswrapper[4835]: I0129 15:53:41.320874 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 15:53:51 crc kubenswrapper[4835]: I0129 15:53:51.615150 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:53:51 crc kubenswrapper[4835]: I0129 15:53:51.615851 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:53:54 crc kubenswrapper[4835]: E0129 15:53:54.050460 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:53:56 crc kubenswrapper[4835]: E0129 15:53:56.055679 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 15:53:56 crc kubenswrapper[4835]: E0129 15:53:56.056304 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlptf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-84bb9d8bd9-qnv86_openstack(ed6eb380-9d48-4a59-b66c-5416b8e9eb52): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:56 crc kubenswrapper[4835]: E0129 15:53:56.057530 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" podUID="ed6eb380-9d48-4a59-b66c-5416b8e9eb52" Jan 29 15:53:57 crc kubenswrapper[4835]: E0129 15:53:57.369341 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 15:53:57 crc kubenswrapper[4835]: E0129 15:53:57.371375 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nncc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(427d05dd-22a3-4557-b9b3-9b5a39ea4662): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:57 crc 
kubenswrapper[4835]: E0129 15:53:57.372586 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.184256 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.184668 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n74hffh55bh548h677h688hbfh568h5f7hdfh654h75h5d4h665hc5hf9h58dh54dh557h96h58bh5d9h58dh548h5c4h5bch99h5fh679h97h675h65q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/confi
g_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r2gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(8d24538c-d668-497c-b4cd-4bff633b43fb): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.186978 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.219860 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.219859 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.237459 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.237642 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie 
/var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpnh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[
ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(372d1dd7-8b24-4652-b6bb-bce75af50e4e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.238782 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.267804 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.267989 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8pcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-s6gt5_openstack(a537592e-b4b7-4871-8990-57229798f6b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.269935 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" podUID="a537592e-b4b7-4871-8990-57229798f6b1" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.307008 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.307237 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb2m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-r4nwn_openstack(dc422bf1-9bfb-42fb-b964-e0717259857b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.311989 4835 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.348110 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.348343 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4qp5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-dtq5m_openstack(38c68723-9295-4800-89ae-45b4a8b0bf06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:53:58 crc kubenswrapper[4835]: E0129 15:53:58.349624 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" podUID="38c68723-9295-4800-89ae-45b4a8b0bf06" Jan 29 15:53:59 crc kubenswrapper[4835]: E0129 15:53:59.225255 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" Jan 29 15:53:59 crc kubenswrapper[4835]: E0129 15:53:59.225396 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" Jan 29 15:53:59 crc kubenswrapper[4835]: E0129 15:53:59.225419 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" podUID="a537592e-b4b7-4871-8990-57229798f6b1" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.334615 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.335074 4835 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqh2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[
]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.337330 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.337434 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.337598 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5jx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(c0b8ebff-50e7-4e05-96f8-099448fe550f): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:54:00 crc kubenswrapper[4835]: E0129 15:54:00.338844 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.482935 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.501353 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.659801 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc\") pod \"38c68723-9295-4800-89ae-45b4a8b0bf06\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.659864 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config\") pod \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.659894 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qp5s\" (UniqueName: \"kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s\") pod \"38c68723-9295-4800-89ae-45b4a8b0bf06\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.660013 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config\") pod \"38c68723-9295-4800-89ae-45b4a8b0bf06\" (UID: \"38c68723-9295-4800-89ae-45b4a8b0bf06\") " Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.660087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlptf\" (UniqueName: \"kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf\") pod \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\" (UID: \"ed6eb380-9d48-4a59-b66c-5416b8e9eb52\") " Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.661836 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38c68723-9295-4800-89ae-45b4a8b0bf06" (UID: "38c68723-9295-4800-89ae-45b4a8b0bf06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.662337 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config" (OuterVolumeSpecName: "config") pod "ed6eb380-9d48-4a59-b66c-5416b8e9eb52" (UID: "ed6eb380-9d48-4a59-b66c-5416b8e9eb52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.663463 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config" (OuterVolumeSpecName: "config") pod "38c68723-9295-4800-89ae-45b4a8b0bf06" (UID: "38c68723-9295-4800-89ae-45b4a8b0bf06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.669179 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s" (OuterVolumeSpecName: "kube-api-access-4qp5s") pod "38c68723-9295-4800-89ae-45b4a8b0bf06" (UID: "38c68723-9295-4800-89ae-45b4a8b0bf06"). InnerVolumeSpecName "kube-api-access-4qp5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.673526 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf" (OuterVolumeSpecName: "kube-api-access-wlptf") pod "ed6eb380-9d48-4a59-b66c-5416b8e9eb52" (UID: "ed6eb380-9d48-4a59-b66c-5416b8e9eb52"). InnerVolumeSpecName "kube-api-access-wlptf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.763025 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.763057 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.763071 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qp5s\" (UniqueName: \"kubernetes.io/projected/38c68723-9295-4800-89ae-45b4a8b0bf06-kube-api-access-4qp5s\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.763083 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c68723-9295-4800-89ae-45b4a8b0bf06-config\") on node 
\"crc\" DevicePath \"\"" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.763094 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlptf\" (UniqueName: \"kubernetes.io/projected/ed6eb380-9d48-4a59-b66c-5416b8e9eb52-kube-api-access-wlptf\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.930260 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 15:54:00 crc kubenswrapper[4835]: I0129 15:54:00.981614 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.062283 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.252236 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerStarted","Data":"9851194a389e66e60a9172c26c493ccffbd1c408a433b12d0ba975816c3b178e"} Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.253405 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4" event={"ID":"6757e93d-f6ce-4dc5-8bc3-0de61411df89","Type":"ContainerStarted","Data":"b991f76b6e3f9eafc20c65b87abce2a73767e594d3ccd39f2d4e964edada1f86"} Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.254770 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerStarted","Data":"31a071e73b191ab60c9380db51c7f6fd216968334d3e627698a9aeec9313860d"} Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.256200 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" event={"ID":"38c68723-9295-4800-89ae-45b4a8b0bf06","Type":"ContainerDied","Data":"f9521ccbb610d5098d42ccbbd5679c7cf6d36e35f580e223de14c86c606daf24"} Jan 29 
15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.256223 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-dtq5m" Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.258517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" event={"ID":"ed6eb380-9d48-4a59-b66c-5416b8e9eb52","Type":"ContainerDied","Data":"f9b4007d8d40f6fb7ec4b2468cdd1c56985e823abf48a2f0374709b5845abc8a"} Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.258634 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-qnv86" Jan 29 15:54:01 crc kubenswrapper[4835]: E0129 15:54:01.261458 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" Jan 29 15:54:01 crc kubenswrapper[4835]: E0129 15:54:01.261556 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-galera-0" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.398420 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.410418 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-dtq5m"] Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.424975 4835 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:54:01 crc kubenswrapper[4835]: I0129 15:54:01.433091 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-qnv86"] Jan 29 15:54:02 crc kubenswrapper[4835]: I0129 15:54:02.068500 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 15:54:02 crc kubenswrapper[4835]: I0129 15:54:02.503662 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c68723-9295-4800-89ae-45b4a8b0bf06" path="/var/lib/kubelet/pods/38c68723-9295-4800-89ae-45b4a8b0bf06/volumes" Jan 29 15:54:02 crc kubenswrapper[4835]: I0129 15:54:02.504543 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6eb380-9d48-4a59-b66c-5416b8e9eb52" path="/var/lib/kubelet/pods/ed6eb380-9d48-4a59-b66c-5416b8e9eb52/volumes" Jan 29 15:54:02 crc kubenswrapper[4835]: W0129 15:54:02.509443 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc934dd0_048d_4030_986d_b76e0b966d73.slice/crio-fcdcfef0169feb2cb341f0b0b1062319b54c7f2feaedf66711f36f73efcaa44f WatchSource:0}: Error finding container fcdcfef0169feb2cb341f0b0b1062319b54c7f2feaedf66711f36f73efcaa44f: Status 404 returned error can't find the container with id fcdcfef0169feb2cb341f0b0b1062319b54c7f2feaedf66711f36f73efcaa44f Jan 29 15:54:03 crc kubenswrapper[4835]: I0129 15:54:03.278483 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerStarted","Data":"fcdcfef0169feb2cb341f0b0b1062319b54c7f2feaedf66711f36f73efcaa44f"} Jan 29 15:54:07 crc kubenswrapper[4835]: E0129 15:54:07.106294 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting 
bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:54:07 crc kubenswrapper[4835]: E0129 15:54:07.106770 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mglp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hjvxh_openshift-marketplace(7c0649a1-77a6-4e5c-a6f1-d96480a91155): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: 
Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:54:07 crc kubenswrapper[4835]: E0129 15:54:07.107964 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:54:20 crc kubenswrapper[4835]: E0129 15:54:20.691529 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.425656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4" event={"ID":"6757e93d-f6ce-4dc5-8bc3-0de61411df89","Type":"ContainerStarted","Data":"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840"} Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.425975 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wvpc4" Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.428700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerStarted","Data":"1126a1245e4d866681af419f64243fbad52d410573c374d6743124c8332dfde0"} Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.448707 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wvpc4" podStartSLOduration=29.126877953 podStartE2EDuration="45.448684657s" 
podCreationTimestamp="2026-01-29 15:53:36 +0000 UTC" firstStartedPulling="2026-01-29 15:54:01.174331315 +0000 UTC m=+1543.367374979" lastFinishedPulling="2026-01-29 15:54:17.496138029 +0000 UTC m=+1559.689181683" observedRunningTime="2026-01-29 15:54:21.441777435 +0000 UTC m=+1563.634821089" watchObservedRunningTime="2026-01-29 15:54:21.448684657 +0000 UTC m=+1563.641728311" Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.615315 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:54:21 crc kubenswrapper[4835]: I0129 15:54:21.615716 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.439637 4835 generic.go:334] "Generic (PLEG): container finished" podID="a537592e-b4b7-4871-8990-57229798f6b1" containerID="5328938cfab77d7070c27671dcccacc3f1dd947ac295e25c4820651d4d49c720" exitCode=0 Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.439781 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" event={"ID":"a537592e-b4b7-4871-8990-57229798f6b1","Type":"ContainerDied","Data":"5328938cfab77d7070c27671dcccacc3f1dd947ac295e25c4820651d4d49c720"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.446931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" 
event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerStarted","Data":"a6815e235eeb17a95b6c772864122d9f38dcdd678f33732ce16c2ce0f1dd5a08"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.450458 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8d24538c-d668-497c-b4cd-4bff633b43fb","Type":"ContainerStarted","Data":"f13f382fce2fe07b9bdea9fc7a6cf5b29127c26d9f07750ff6f018c3213666cb"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.451660 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.453601 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerStarted","Data":"6bdfd393d80d8cc5d2c247ae44820ea1572128277c82ec3956cbbb2b6731b3c4"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.456834 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"28230ab4-6e4e-497e-9c31-22ff91eba741","Type":"ContainerStarted","Data":"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.456917 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.458890 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerStarted","Data":"cab68b1c20e1eeb639188943fc6dfaec2ed8876d8e5fed7d07634355f89cc27c"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.464439 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc934dd0-048d-4030-986d-b76e0b966d73" containerID="1126a1245e4d866681af419f64243fbad52d410573c374d6743124c8332dfde0" exitCode=0 Jan 29 15:54:22 crc kubenswrapper[4835]: 
I0129 15:54:22.464600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerDied","Data":"1126a1245e4d866681af419f64243fbad52d410573c374d6743124c8332dfde0"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.467287 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerStarted","Data":"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.468953 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerStarted","Data":"63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1"} Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.491665 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.860476113 podStartE2EDuration="51.491647951s" podCreationTimestamp="2026-01-29 15:53:31 +0000 UTC" firstStartedPulling="2026-01-29 15:53:32.350433519 +0000 UTC m=+1514.543477173" lastFinishedPulling="2026-01-29 15:54:20.981605357 +0000 UTC m=+1563.174649011" observedRunningTime="2026-01-29 15:54:22.486169775 +0000 UTC m=+1564.679213429" watchObservedRunningTime="2026-01-29 15:54:22.491647951 +0000 UTC m=+1564.684691605" Jan 29 15:54:22 crc kubenswrapper[4835]: I0129 15:54:22.537627 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=6.695395166 podStartE2EDuration="50.537605964s" podCreationTimestamp="2026-01-29 15:53:32 +0000 UTC" firstStartedPulling="2026-01-29 15:53:37.138690482 +0000 UTC m=+1519.331734136" lastFinishedPulling="2026-01-29 15:54:20.98090128 +0000 UTC m=+1563.173944934" observedRunningTime="2026-01-29 15:54:22.515035363 +0000 UTC 
m=+1564.708079017" watchObservedRunningTime="2026-01-29 15:54:22.537605964 +0000 UTC m=+1564.730649618" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.149264 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.151920 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.154251 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.171805 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.269997 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.270067 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp97\" (UniqueName: \"kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.270106 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") 
" pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.270276 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.270344 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.270444 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.319973 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.372902 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.373001 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.373044 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mp97\" (UniqueName: \"kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.373070 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.373121 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.373164 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.374023 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.374315 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.375081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.381454 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.385788 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.387488 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.388699 4835 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.397928 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.401154 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mp97\" (UniqueName: \"kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97\") pod \"ovn-controller-metrics-gpfb4\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.468682 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.476327 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmpl\" (UniqueName: \"kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.477083 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.477262 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " 
pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.477431 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.478310 4835 generic.go:334] "Generic (PLEG): container finished" podID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerID="a6815e235eeb17a95b6c772864122d9f38dcdd678f33732ce16c2ce0f1dd5a08" exitCode=0 Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.479557 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerDied","Data":"a6815e235eeb17a95b6c772864122d9f38dcdd678f33732ce16c2ce0f1dd5a08"} Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.506125 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.581522 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.582440 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 
15:54:23.582909 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.583024 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmpl\" (UniqueName: \"kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.583069 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.591842 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.596067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.630113 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmpl\" 
(UniqueName: \"kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl\") pod \"dnsmasq-dns-7878659675-c8klz\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.634824 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.703835 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.707486 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.712143 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.715211 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.782995 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.790854 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.791182 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.791243 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.791312 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxzrb\" (UniqueName: \"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.791787 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" 
(UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.892671 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.892713 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.892742 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.892767 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxzrb\" (UniqueName: \"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.892829 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " 
pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.894703 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.895066 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.895485 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.895844 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:23 crc kubenswrapper[4835]: I0129 15:54:23.911899 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxzrb\" (UniqueName: \"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb\") pod \"dnsmasq-dns-586b989cdc-g6pdg\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.040378 
4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.094289 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 15:54:24 crc kubenswrapper[4835]: W0129 15:54:24.101219 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e7c01a6_4ad4_4b01_ab23_176e12337f1d.slice/crio-49d64c451c1cf40361a87c5fb0b0a99f68b4077c9b1ecf4e081624244f0d1bfb WatchSource:0}: Error finding container 49d64c451c1cf40361a87c5fb0b0a99f68b4077c9b1ecf4e081624244f0d1bfb: Status 404 returned error can't find the container with id 49d64c451c1cf40361a87c5fb0b0a99f68b4077c9b1ecf4e081624244f0d1bfb Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.253181 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.291819 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:24 crc kubenswrapper[4835]: W0129 15:54:24.295392 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c992b4e_e5e9_4242_bace_70251be4386a.slice/crio-1428ec4a5ccd095ddaadc24a16d973d4401f3fc51d6231a173a4dae06bd37e15 WatchSource:0}: Error finding container 1428ec4a5ccd095ddaadc24a16d973d4401f3fc51d6231a173a4dae06bd37e15: Status 404 returned error can't find the container with id 1428ec4a5ccd095ddaadc24a16d973d4401f3fc51d6231a173a4dae06bd37e15 Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.486917 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-c8klz" 
event={"ID":"b115ffa4-1248-4584-bfcd-6ecfa552ef9f","Type":"ContainerStarted","Data":"eb85c0f15164228cd2fac654cbea5b0c25ad54170d43730ab02a7aedf58caa59"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.490266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerStarted","Data":"95673e5358d36b47c1e48b271a3c17a61a17ebc7c46665b23b98bc1f5b013609"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.490482 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="dnsmasq-dns" containerID="cri-o://95673e5358d36b47c1e48b271a3c17a61a17ebc7c46665b23b98bc1f5b013609" gracePeriod=10 Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.490785 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.502328 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerStarted","Data":"b8597f16a2219b2b8c34c3748733464d2ee66f76e90cbef7f856667a18a6d90d"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.510101 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerStarted","Data":"9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.519567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" event={"ID":"6c992b4e-e5e9-4242-bace-70251be4386a","Type":"ContainerStarted","Data":"1428ec4a5ccd095ddaadc24a16d973d4401f3fc51d6231a173a4dae06bd37e15"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.523731 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerStarted","Data":"64169b97629fcadaf1de1bd1f7d89edfecc0da8978b11c86487715e1aaeefbfb"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.525632 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" podStartSLOduration=4.555624906 podStartE2EDuration="58.52561225s" podCreationTimestamp="2026-01-29 15:53:26 +0000 UTC" firstStartedPulling="2026-01-29 15:53:27.733282211 +0000 UTC m=+1509.926325865" lastFinishedPulling="2026-01-29 15:54:21.703269515 +0000 UTC m=+1563.896313209" observedRunningTime="2026-01-29 15:54:24.518130344 +0000 UTC m=+1566.711173998" watchObservedRunningTime="2026-01-29 15:54:24.52561225 +0000 UTC m=+1566.718655894" Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.538190 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gpfb4" event={"ID":"7e7c01a6-4ad4-4b01-ab23-176e12337f1d","Type":"ContainerStarted","Data":"49d64c451c1cf40361a87c5fb0b0a99f68b4077c9b1ecf4e081624244f0d1bfb"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.540119 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" event={"ID":"a537592e-b4b7-4871-8990-57229798f6b1","Type":"ContainerStarted","Data":"f142a1c724ec164b0b19f783b97b5ebcaa7e49a6ac8124a92c429f2aced82020"} Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.540264 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="dnsmasq-dns" containerID="cri-o://f142a1c724ec164b0b19f783b97b5ebcaa7e49a6ac8124a92c429f2aced82020" gracePeriod=10 Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.540276 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:54:24 crc kubenswrapper[4835]: I0129 15:54:24.597540 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" podStartSLOduration=4.93511472 podStartE2EDuration="58.597509818s" podCreationTimestamp="2026-01-29 15:53:26 +0000 UTC" firstStartedPulling="2026-01-29 15:53:27.31765924 +0000 UTC m=+1509.510702894" lastFinishedPulling="2026-01-29 15:54:20.980054318 +0000 UTC m=+1563.173097992" observedRunningTime="2026-01-29 15:54:24.592733619 +0000 UTC m=+1566.785777273" watchObservedRunningTime="2026-01-29 15:54:24.597509818 +0000 UTC m=+1566.790553462" Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.549321 4835 generic.go:334] "Generic (PLEG): container finished" podID="6c992b4e-e5e9-4242-bace-70251be4386a" containerID="377d4e28d98ee1f75772a16b5d8a5b459ae5d8aefde000f162ff756c34c04cf6" exitCode=0 Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.549522 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" event={"ID":"6c992b4e-e5e9-4242-bace-70251be4386a","Type":"ContainerDied","Data":"377d4e28d98ee1f75772a16b5d8a5b459ae5d8aefde000f162ff756c34c04cf6"} Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.556806 4835 generic.go:334] "Generic (PLEG): container finished" podID="a537592e-b4b7-4871-8990-57229798f6b1" containerID="f142a1c724ec164b0b19f783b97b5ebcaa7e49a6ac8124a92c429f2aced82020" exitCode=0 Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.556884 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" event={"ID":"a537592e-b4b7-4871-8990-57229798f6b1","Type":"ContainerDied","Data":"f142a1c724ec164b0b19f783b97b5ebcaa7e49a6ac8124a92c429f2aced82020"} Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.559081 4835 generic.go:334] "Generic (PLEG): container finished" podID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" 
containerID="6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf" exitCode=0 Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.559157 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-c8klz" event={"ID":"b115ffa4-1248-4584-bfcd-6ecfa552ef9f","Type":"ContainerDied","Data":"6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf"} Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.573563 4835 generic.go:334] "Generic (PLEG): container finished" podID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerID="95673e5358d36b47c1e48b271a3c17a61a17ebc7c46665b23b98bc1f5b013609" exitCode=0 Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.573626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerDied","Data":"95673e5358d36b47c1e48b271a3c17a61a17ebc7c46665b23b98bc1f5b013609"} Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.577919 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerStarted","Data":"d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d"} Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.578701 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.578725 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.613582 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-v49xf" podStartSLOduration=31.146173077 podStartE2EDuration="49.613566724s" podCreationTimestamp="2026-01-29 15:53:36 +0000 UTC" firstStartedPulling="2026-01-29 15:54:02.512149399 +0000 UTC 
m=+1544.705193063" lastFinishedPulling="2026-01-29 15:54:20.979543036 +0000 UTC m=+1563.172586710" observedRunningTime="2026-01-29 15:54:25.610332534 +0000 UTC m=+1567.803376188" watchObservedRunningTime="2026-01-29 15:54:25.613566724 +0000 UTC m=+1567.806610378" Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.904782 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:54:25 crc kubenswrapper[4835]: I0129 15:54:25.941085 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038149 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2m2\" (UniqueName: \"kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2\") pod \"dc422bf1-9bfb-42fb-b964-e0717259857b\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038336 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc\") pod \"a537592e-b4b7-4871-8990-57229798f6b1\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038413 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config\") pod \"dc422bf1-9bfb-42fb-b964-e0717259857b\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038497 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8pcf\" (UniqueName: \"kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf\") pod 
\"a537592e-b4b7-4871-8990-57229798f6b1\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038571 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc\") pod \"dc422bf1-9bfb-42fb-b964-e0717259857b\" (UID: \"dc422bf1-9bfb-42fb-b964-e0717259857b\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.038596 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config\") pod \"a537592e-b4b7-4871-8990-57229798f6b1\" (UID: \"a537592e-b4b7-4871-8990-57229798f6b1\") " Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.046670 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2" (OuterVolumeSpecName: "kube-api-access-zb2m2") pod "dc422bf1-9bfb-42fb-b964-e0717259857b" (UID: "dc422bf1-9bfb-42fb-b964-e0717259857b"). InnerVolumeSpecName "kube-api-access-zb2m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.047613 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf" (OuterVolumeSpecName: "kube-api-access-d8pcf") pod "a537592e-b4b7-4871-8990-57229798f6b1" (UID: "a537592e-b4b7-4871-8990-57229798f6b1"). InnerVolumeSpecName "kube-api-access-d8pcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.081865 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc422bf1-9bfb-42fb-b964-e0717259857b" (UID: "dc422bf1-9bfb-42fb-b964-e0717259857b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.083374 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config" (OuterVolumeSpecName: "config") pod "a537592e-b4b7-4871-8990-57229798f6b1" (UID: "a537592e-b4b7-4871-8990-57229798f6b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.086597 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a537592e-b4b7-4871-8990-57229798f6b1" (UID: "a537592e-b4b7-4871-8990-57229798f6b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.092532 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config" (OuterVolumeSpecName: "config") pod "dc422bf1-9bfb-42fb-b964-e0717259857b" (UID: "dc422bf1-9bfb-42fb-b964-e0717259857b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140522 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2m2\" (UniqueName: \"kubernetes.io/projected/dc422bf1-9bfb-42fb-b964-e0717259857b-kube-api-access-zb2m2\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140559 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140571 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140581 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8pcf\" (UniqueName: \"kubernetes.io/projected/a537592e-b4b7-4871-8990-57229798f6b1-kube-api-access-d8pcf\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140592 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc422bf1-9bfb-42fb-b964-e0717259857b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.140602 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a537592e-b4b7-4871-8990-57229798f6b1-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.594496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" event={"ID":"a537592e-b4b7-4871-8990-57229798f6b1","Type":"ContainerDied","Data":"76216b65a757452c30712611ca0837fc2e463fcc5782ee1dcb018171c0cead56"} Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 
15:54:26.594828 4835 scope.go:117] "RemoveContainer" containerID="f142a1c724ec164b0b19f783b97b5ebcaa7e49a6ac8124a92c429f2aced82020" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.594689 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-s6gt5" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.600227 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.600588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-r4nwn" event={"ID":"dc422bf1-9bfb-42fb-b964-e0717259857b","Type":"ContainerDied","Data":"8dc0b76e5213425ea9f676225eb18f568987f0a15eacdbbfa0682d1fdfa65701"} Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.639140 4835 scope.go:117] "RemoveContainer" containerID="5328938cfab77d7070c27671dcccacc3f1dd947ac295e25c4820651d4d49c720" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.648071 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.658247 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-s6gt5"] Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.669566 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.672008 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-r4nwn"] Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.684482 4835 scope.go:117] "RemoveContainer" containerID="95673e5358d36b47c1e48b271a3c17a61a17ebc7c46665b23b98bc1f5b013609" Jan 29 15:54:26 crc kubenswrapper[4835]: I0129 15:54:26.701754 4835 scope.go:117] "RemoveContainer" 
containerID="a6815e235eeb17a95b6c772864122d9f38dcdd678f33732ce16c2ce0f1dd5a08" Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.609110 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-c8klz" event={"ID":"b115ffa4-1248-4584-bfcd-6ecfa552ef9f","Type":"ContainerStarted","Data":"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995"} Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.609619 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.613008 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" event={"ID":"6c992b4e-e5e9-4242-bace-70251be4386a","Type":"ContainerStarted","Data":"4398b299daa2fb044b2522fe9531e85bbd8647a6f91bd08c542df61c4bf29006"} Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.613194 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.633133 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-c8klz" podStartSLOduration=4.633111614 podStartE2EDuration="4.633111614s" podCreationTimestamp="2026-01-29 15:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:54:27.629131235 +0000 UTC m=+1569.822174909" watchObservedRunningTime="2026-01-29 15:54:27.633111614 +0000 UTC m=+1569.826155268" Jan 29 15:54:27 crc kubenswrapper[4835]: I0129 15:54:27.651948 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" podStartSLOduration=4.651926801 podStartE2EDuration="4.651926801s" podCreationTimestamp="2026-01-29 15:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:54:27.648766143 +0000 UTC m=+1569.841809837" watchObservedRunningTime="2026-01-29 15:54:27.651926801 +0000 UTC m=+1569.844970455" Jan 29 15:54:28 crc kubenswrapper[4835]: I0129 15:54:28.510817 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a537592e-b4b7-4871-8990-57229798f6b1" path="/var/lib/kubelet/pods/a537592e-b4b7-4871-8990-57229798f6b1/volumes" Jan 29 15:54:28 crc kubenswrapper[4835]: I0129 15:54:28.511726 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" path="/var/lib/kubelet/pods/dc422bf1-9bfb-42fb-b964-e0717259857b/volumes" Jan 29 15:54:31 crc kubenswrapper[4835]: I0129 15:54:31.672651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.323719 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.364744 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.365014 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="dnsmasq-dns" containerID="cri-o://4398b299daa2fb044b2522fe9531e85bbd8647a6f91bd08c542df61c4bf29006" gracePeriod=10 Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.365743 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450282 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:54:33 crc kubenswrapper[4835]: E0129 15:54:33.450678 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="init" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450692 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="init" Jan 29 15:54:33 crc kubenswrapper[4835]: E0129 15:54:33.450769 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450781 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: E0129 15:54:33.450798 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="init" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450805 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="init" Jan 29 15:54:33 crc kubenswrapper[4835]: E0129 15:54:33.450825 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450833 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.450994 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc422bf1-9bfb-42fb-b964-e0717259857b" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.451006 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a537592e-b4b7-4871-8990-57229798f6b1" containerName="dnsmasq-dns" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.451879 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.465476 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.570900 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.571003 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.571397 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.571456 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.571661 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jqjrf\" (UniqueName: \"kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.666662 4835 generic.go:334] "Generic (PLEG): container finished" podID="6c992b4e-e5e9-4242-bace-70251be4386a" containerID="4398b299daa2fb044b2522fe9531e85bbd8647a6f91bd08c542df61c4bf29006" exitCode=0 Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.666743 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" event={"ID":"6c992b4e-e5e9-4242-bace-70251be4386a","Type":"ContainerDied","Data":"4398b299daa2fb044b2522fe9531e85bbd8647a6f91bd08c542df61c4bf29006"} Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.673802 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.673879 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.673911 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc 
kubenswrapper[4835]: I0129 15:54:33.674019 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqjrf\" (UniqueName: \"kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: E0129 15:54:33.674048 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.674066 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.677778 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.677982 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.685617 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.686017 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.700819 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqjrf\" (UniqueName: \"kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf\") pod \"dnsmasq-dns-67fdf7998c-vkwvp\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.768659 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:33 crc kubenswrapper[4835]: I0129 15:54:33.784626 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.283202 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.385202 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc\") pod \"6c992b4e-e5e9-4242-bace-70251be4386a\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.393072 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb\") pod \"6c992b4e-e5e9-4242-bace-70251be4386a\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.393190 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxzrb\" (UniqueName: \"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb\") pod \"6c992b4e-e5e9-4242-bace-70251be4386a\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.393266 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb\") pod \"6c992b4e-e5e9-4242-bace-70251be4386a\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.393300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config\") pod \"6c992b4e-e5e9-4242-bace-70251be4386a\" (UID: \"6c992b4e-e5e9-4242-bace-70251be4386a\") " Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.398981 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb" (OuterVolumeSpecName: "kube-api-access-xxzrb") pod "6c992b4e-e5e9-4242-bace-70251be4386a" (UID: "6c992b4e-e5e9-4242-bace-70251be4386a"). InnerVolumeSpecName "kube-api-access-xxzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.495477 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxzrb\" (UniqueName: \"kubernetes.io/projected/6c992b4e-e5e9-4242-bace-70251be4386a-kube-api-access-xxzrb\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.532884 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c992b4e-e5e9-4242-bace-70251be4386a" (UID: "6c992b4e-e5e9-4242-bace-70251be4386a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.574960 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:54:34 crc kubenswrapper[4835]: E0129 15:54:34.575543 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="init" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.575592 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="init" Jan 29 15:54:34 crc kubenswrapper[4835]: E0129 15:54:34.575652 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="dnsmasq-dns" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.575664 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="dnsmasq-dns" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 
15:54:34.575935 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="dnsmasq-dns" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.579220 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c992b4e-e5e9-4242-bace-70251be4386a" (UID: "6c992b4e-e5e9-4242-bace-70251be4386a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.581494 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c992b4e-e5e9-4242-bace-70251be4386a" (UID: "6c992b4e-e5e9-4242-bace-70251be4386a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.582363 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.584782 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8k2sh" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.584988 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.589650 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.592269 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.596982 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.597008 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.597017 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.601230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config" (OuterVolumeSpecName: "config") pod "6c992b4e-e5e9-4242-bace-70251be4386a" (UID: "6c992b4e-e5e9-4242-bace-70251be4386a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.609451 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:54:34 crc kubenswrapper[4835]: W0129 15:54:34.654015 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58b6d24b_00d3_4fb3_921e_9b2671776618.slice/crio-b257d581af23587e28cba2c0d53269ab8b5c87046fb8f9d3e85fdd7fa74356de WatchSource:0}: Error finding container b257d581af23587e28cba2c0d53269ab8b5c87046fb8f9d3e85fdd7fa74356de: Status 404 returned error can't find the container with id b257d581af23587e28cba2c0d53269ab8b5c87046fb8f9d3e85fdd7fa74356de Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.655551 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.679292 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" event={"ID":"6c992b4e-e5e9-4242-bace-70251be4386a","Type":"ContainerDied","Data":"1428ec4a5ccd095ddaadc24a16d973d4401f3fc51d6231a173a4dae06bd37e15"} Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.679342 4835 scope.go:117] "RemoveContainer" containerID="4398b299daa2fb044b2522fe9531e85bbd8647a6f91bd08c542df61c4bf29006" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.679539 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.682836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" event={"ID":"58b6d24b-00d3-4fb3-921e-9b2671776618","Type":"ContainerStarted","Data":"b257d581af23587e28cba2c0d53269ab8b5c87046fb8f9d3e85fdd7fa74356de"} Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698241 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698658 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698726 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698764 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698797 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdg2v\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.698888 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c992b4e-e5e9-4242-bace-70251be4386a-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.742188 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.749453 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-g6pdg"] Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800214 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800349 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800392 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdg2v\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800439 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800552 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: E0129 15:54:34.800583 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:54:34 crc kubenswrapper[4835]: E0129 15:54:34.800627 4835 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:54:34 crc kubenswrapper[4835]: E0129 15:54:34.800682 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift podName:cfc7f633-4eb5-481c-9e81-0598c584cc82 nodeName:}" failed. No retries permitted until 2026-01-29 15:54:35.300663748 +0000 UTC m=+1577.493707402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift") pod "swift-storage-0" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82") : configmap "swift-ring-files" not found Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.800996 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.801337 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.807169 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.818818 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdg2v\" (UniqueName: 
\"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:34 crc kubenswrapper[4835]: I0129 15:54:34.854516 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.133332 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-z2g6x"] Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.134707 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.136995 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.137042 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.139875 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.154298 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z2g6x"] Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207194 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207252 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207293 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mq8d\" (UniqueName: \"kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.207546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: 
I0129 15:54:35.207877 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309499 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309564 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mq8d\" (UniqueName: \"kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309659 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309708 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309734 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.309780 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:35 crc kubenswrapper[4835]: E0129 15:54:35.309977 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:54:35 crc kubenswrapper[4835]: E0129 15:54:35.309991 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:54:35 crc kubenswrapper[4835]: E0129 15:54:35.310041 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift podName:cfc7f633-4eb5-481c-9e81-0598c584cc82 nodeName:}" failed. No retries permitted until 2026-01-29 15:54:36.31002405 +0000 UTC m=+1578.503067704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift") pod "swift-storage-0" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82") : configmap "swift-ring-files" not found Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.310743 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.310990 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.311052 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.314828 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" 
Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.318204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.324202 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.328615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mq8d\" (UniqueName: \"kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d\") pod \"swift-ring-rebalance-z2g6x\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:35 crc kubenswrapper[4835]: I0129 15:54:35.455047 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:36 crc kubenswrapper[4835]: I0129 15:54:36.324946 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:36 crc kubenswrapper[4835]: E0129 15:54:36.325166 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:54:36 crc kubenswrapper[4835]: E0129 15:54:36.325326 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:54:36 crc kubenswrapper[4835]: E0129 15:54:36.325383 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift podName:cfc7f633-4eb5-481c-9e81-0598c584cc82 nodeName:}" failed. No retries permitted until 2026-01-29 15:54:38.325365618 +0000 UTC m=+1580.518409272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift") pod "swift-storage-0" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82") : configmap "swift-ring-files" not found Jan 29 15:54:36 crc kubenswrapper[4835]: I0129 15:54:36.501332 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" path="/var/lib/kubelet/pods/6c992b4e-e5e9-4242-bace-70251be4386a/volumes" Jan 29 15:54:36 crc kubenswrapper[4835]: I0129 15:54:36.710259 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerStarted","Data":"1c460abfe7dfc5d912ef11f69c1789679aced0cf5dd1d1c2a054756009ac4f3a"} Jan 29 15:54:36 crc kubenswrapper[4835]: I0129 15:54:36.712032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerStarted","Data":"22e7fb3c4f9ff91459d857bf70ed22149138014d698e91d8efaeea0dac7b2a71"} Jan 29 15:54:36 crc kubenswrapper[4835]: I0129 15:54:36.827161 4835 scope.go:117] "RemoveContainer" containerID="377d4e28d98ee1f75772a16b5d8a5b459ae5d8aefde000f162ff756c34c04cf6" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.339399 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z2g6x"] Jan 29 15:54:37 crc kubenswrapper[4835]: W0129 15:54:37.346104 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25eb2a12_67b0_4a75_9923_9b31dec57982.slice/crio-b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab WatchSource:0}: Error finding container b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab: Status 404 returned error can't find the container with id b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab Jan 29 
15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.719328 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z2g6x" event={"ID":"25eb2a12-67b0-4a75-9923-9b31dec57982","Type":"ContainerStarted","Data":"b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab"} Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.721675 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gpfb4" event={"ID":"7e7c01a6-4ad4-4b01-ab23-176e12337f1d","Type":"ContainerStarted","Data":"ab287bc3d69fa692e9cd2585dfd495de5681f84ae38e21c4cac121117ad30151"} Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.723313 4835 generic.go:334] "Generic (PLEG): container finished" podID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerID="4e26dcc70d3859384de4d1f2e60bb8c3f74f3e5b77b0f88815de6a1ab09bb8ac" exitCode=0 Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.723394 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" event={"ID":"58b6d24b-00d3-4fb3-921e-9b2671776618","Type":"ContainerDied","Data":"4e26dcc70d3859384de4d1f2e60bb8c3f74f3e5b77b0f88815de6a1ab09bb8ac"} Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.740899 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gpfb4" podStartSLOduration=1.9847068220000001 podStartE2EDuration="14.740883703s" podCreationTimestamp="2026-01-29 15:54:23 +0000 UTC" firstStartedPulling="2026-01-29 15:54:24.107481967 +0000 UTC m=+1566.300525631" lastFinishedPulling="2026-01-29 15:54:36.863658868 +0000 UTC m=+1579.056702512" observedRunningTime="2026-01-29 15:54:37.740487044 +0000 UTC m=+1579.933530718" watchObservedRunningTime="2026-01-29 15:54:37.740883703 +0000 UTC m=+1579.933927357" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.818662 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" 
podStartSLOduration=28.899379758 podStartE2EDuration="1m1.818628116s" podCreationTimestamp="2026-01-29 15:53:36 +0000 UTC" firstStartedPulling="2026-01-29 15:54:01.173878833 +0000 UTC m=+1543.366922487" lastFinishedPulling="2026-01-29 15:54:34.093127171 +0000 UTC m=+1576.286170845" observedRunningTime="2026-01-29 15:54:37.8099594 +0000 UTC m=+1580.003003054" watchObservedRunningTime="2026-01-29 15:54:37.818628116 +0000 UTC m=+1580.011671810" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.848221 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.917453228 podStartE2EDuration="58.848188551s" podCreationTimestamp="2026-01-29 15:53:39 +0000 UTC" firstStartedPulling="2026-01-29 15:54:01.173919624 +0000 UTC m=+1543.366963278" lastFinishedPulling="2026-01-29 15:54:34.104654937 +0000 UTC m=+1576.297698601" observedRunningTime="2026-01-29 15:54:37.843857033 +0000 UTC m=+1580.036900687" watchObservedRunningTime="2026-01-29 15:54:37.848188551 +0000 UTC m=+1580.041232205" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.908738 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.908784 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 29 15:54:37 crc kubenswrapper[4835]: I0129 15:54:37.964109 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.323289 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.361197 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod 
\"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:38 crc kubenswrapper[4835]: E0129 15:54:38.361440 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:54:38 crc kubenswrapper[4835]: E0129 15:54:38.361492 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:54:38 crc kubenswrapper[4835]: E0129 15:54:38.361570 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift podName:cfc7f633-4eb5-481c-9e81-0598c584cc82 nodeName:}" failed. No retries permitted until 2026-01-29 15:54:42.361541121 +0000 UTC m=+1584.554584775 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift") pod "swift-storage-0" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82") : configmap "swift-ring-files" not found Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.397094 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.732810 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" event={"ID":"58b6d24b-00d3-4fb3-921e-9b2671776618","Type":"ContainerStarted","Data":"d3aa460fbb5daecf35c017919cb7b5ef5ad94eb778a851f73bfaba821cdf84b4"} Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.733579 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.733734 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 
15:54:38.760615 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" podStartSLOduration=5.760599101 podStartE2EDuration="5.760599101s" podCreationTimestamp="2026-01-29 15:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:54:38.754681084 +0000 UTC m=+1580.947724738" watchObservedRunningTime="2026-01-29 15:54:38.760599101 +0000 UTC m=+1580.953642755" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.776029 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 29 15:54:38 crc kubenswrapper[4835]: I0129 15:54:38.781752 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.040927 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-g6pdg" podUID="6c992b4e-e5e9-4242-bace-70251be4386a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.075549 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.082569 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.084951 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.085240 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.085431 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.086977 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rf4gl" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.106571 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-255wc\" (UniqueName: \"kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182542 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " 
pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182587 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.182892 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284423 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284507 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-255wc\" (UniqueName: \"kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284547 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284587 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284632 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284686 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.284740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.286975 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.293038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.295010 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.295238 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.299428 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.310258 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.350236 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-255wc\" (UniqueName: \"kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc\") pod \"ovn-northd-0\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: E0129 15:54:39.354445 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0cc44f_ffb4_4d53_8219_a3ce2bd31e05.slice/crio-cab68b1c20e1eeb639188943fc6dfaec2ed8876d8e5fed7d07634355f89cc27c.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.401964 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.740596 4835 generic.go:334] "Generic (PLEG): container finished" podID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerID="cab68b1c20e1eeb639188943fc6dfaec2ed8876d8e5fed7d07634355f89cc27c" exitCode=0 Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.740655 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerDied","Data":"cab68b1c20e1eeb639188943fc6dfaec2ed8876d8e5fed7d07634355f89cc27c"} Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.745380 4835 generic.go:334] "Generic (PLEG): container finished" podID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerID="159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87" exitCode=0 Jan 29 15:54:39 crc kubenswrapper[4835]: I0129 15:54:39.746245 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerDied","Data":"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87"} Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.174611 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 15:54:42 crc kubenswrapper[4835]: W0129 15:54:42.179596 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d707fad_0945_4d95_a1ac_3fcf77e60566.slice/crio-216bb6121f2ef28e5ca30f14201a902a3645f4bfbf57e8aeae8e438a3c82ad61 WatchSource:0}: Error finding container 216bb6121f2ef28e5ca30f14201a902a3645f4bfbf57e8aeae8e438a3c82ad61: Status 404 returned error can't find the container with id 216bb6121f2ef28e5ca30f14201a902a3645f4bfbf57e8aeae8e438a3c82ad61 Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.382544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:42 crc kubenswrapper[4835]: E0129 15:54:42.382767 4835 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 15:54:42 crc kubenswrapper[4835]: E0129 15:54:42.383098 4835 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 15:54:42 crc kubenswrapper[4835]: E0129 15:54:42.383176 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift podName:cfc7f633-4eb5-481c-9e81-0598c584cc82 nodeName:}" failed. No retries permitted until 2026-01-29 15:54:50.383157917 +0000 UTC m=+1592.576201571 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift") pod "swift-storage-0" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82") : configmap "swift-ring-files" not found Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.774774 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerStarted","Data":"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd"} Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.777982 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerStarted","Data":"216bb6121f2ef28e5ca30f14201a902a3645f4bfbf57e8aeae8e438a3c82ad61"} Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.780706 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerStarted","Data":"de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56"} Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.782737 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z2g6x" event={"ID":"25eb2a12-67b0-4a75-9923-9b31dec57982","Type":"ContainerStarted","Data":"aa6eae13e0634e1a486531e39af03d0f843ce795cea3a7a057c2290823b9b226"} Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.801611 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.822635393 podStartE2EDuration="1m14.801594639s" podCreationTimestamp="2026-01-29 15:53:28 +0000 UTC" firstStartedPulling="2026-01-29 15:53:30.728231097 +0000 UTC m=+1512.921274751" lastFinishedPulling="2026-01-29 15:54:21.707190343 +0000 UTC m=+1563.900233997" observedRunningTime="2026-01-29 15:54:42.795052636 +0000 UTC m=+1584.988096290" watchObservedRunningTime="2026-01-29 15:54:42.801594639 +0000 UTC m=+1584.994638293" Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.820702 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-z2g6x" podStartSLOduration=3.279594724 podStartE2EDuration="7.820686093s" podCreationTimestamp="2026-01-29 15:54:35 +0000 UTC" firstStartedPulling="2026-01-29 15:54:37.348322235 +0000 UTC m=+1579.541365899" lastFinishedPulling="2026-01-29 15:54:41.889413614 +0000 UTC m=+1584.082457268" observedRunningTime="2026-01-29 15:54:42.816830167 +0000 UTC m=+1585.009873821" watchObservedRunningTime="2026-01-29 15:54:42.820686093 +0000 UTC m=+1585.013729747" Jan 29 15:54:42 crc kubenswrapper[4835]: I0129 15:54:42.848066 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.801917211 podStartE2EDuration="1m13.848048713s" podCreationTimestamp="2026-01-29 15:53:29 
+0000 UTC" firstStartedPulling="2026-01-29 15:53:31.93486552 +0000 UTC m=+1514.127909174" lastFinishedPulling="2026-01-29 15:54:20.980997012 +0000 UTC m=+1563.174040676" observedRunningTime="2026-01-29 15:54:42.841567302 +0000 UTC m=+1585.034610986" watchObservedRunningTime="2026-01-29 15:54:42.848048713 +0000 UTC m=+1585.041092367" Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.769608 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.794064 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerStarted","Data":"f8216ff353bcb5dadd1afc0099c19fd068e4e203e3468e4a67e19d4e8e285ff2"} Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.794105 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerStarted","Data":"49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6"} Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.794142 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.828109 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.828359 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-c8klz" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="dnsmasq-dns" containerID="cri-o://a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995" gracePeriod=10 Jan 29 15:54:43 crc kubenswrapper[4835]: I0129 15:54:43.833679 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.901642805 
podStartE2EDuration="4.833662252s" podCreationTimestamp="2026-01-29 15:54:39 +0000 UTC" firstStartedPulling="2026-01-29 15:54:42.181333961 +0000 UTC m=+1584.374377615" lastFinishedPulling="2026-01-29 15:54:43.113353408 +0000 UTC m=+1585.306397062" observedRunningTime="2026-01-29 15:54:43.825730045 +0000 UTC m=+1586.018773709" watchObservedRunningTime="2026-01-29 15:54:43.833662252 +0000 UTC m=+1586.026705906" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.316454 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.435014 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb\") pod \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.435149 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc\") pod \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.435217 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config\") pod \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\" (UID: \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.435272 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msmpl\" (UniqueName: \"kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl\") pod \"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\" (UID: 
\"b115ffa4-1248-4584-bfcd-6ecfa552ef9f\") " Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.445058 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl" (OuterVolumeSpecName: "kube-api-access-msmpl") pod "b115ffa4-1248-4584-bfcd-6ecfa552ef9f" (UID: "b115ffa4-1248-4584-bfcd-6ecfa552ef9f"). InnerVolumeSpecName "kube-api-access-msmpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.478854 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config" (OuterVolumeSpecName: "config") pod "b115ffa4-1248-4584-bfcd-6ecfa552ef9f" (UID: "b115ffa4-1248-4584-bfcd-6ecfa552ef9f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.481152 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b115ffa4-1248-4584-bfcd-6ecfa552ef9f" (UID: "b115ffa4-1248-4584-bfcd-6ecfa552ef9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.483757 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b115ffa4-1248-4584-bfcd-6ecfa552ef9f" (UID: "b115ffa4-1248-4584-bfcd-6ecfa552ef9f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.537723 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.537760 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.537770 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msmpl\" (UniqueName: \"kubernetes.io/projected/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-kube-api-access-msmpl\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.537782 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b115ffa4-1248-4584-bfcd-6ecfa552ef9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.800263 4835 generic.go:334] "Generic (PLEG): container finished" podID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerID="a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995" exitCode=0 Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.800300 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-c8klz" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.800340 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-c8klz" event={"ID":"b115ffa4-1248-4584-bfcd-6ecfa552ef9f","Type":"ContainerDied","Data":"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995"} Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.800381 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-c8klz" event={"ID":"b115ffa4-1248-4584-bfcd-6ecfa552ef9f","Type":"ContainerDied","Data":"eb85c0f15164228cd2fac654cbea5b0c25ad54170d43730ab02a7aedf58caa59"} Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.800399 4835 scope.go:117] "RemoveContainer" containerID="a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.823705 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.826567 4835 scope.go:117] "RemoveContainer" containerID="6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.844724 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-c8klz"] Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.851781 4835 scope.go:117] "RemoveContainer" containerID="a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995" Jan 29 15:54:44 crc kubenswrapper[4835]: E0129 15:54:44.852886 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995\": container with ID starting with a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995 not found: ID does not exist" 
containerID="a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.852922 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995"} err="failed to get container status \"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995\": rpc error: code = NotFound desc = could not find container \"a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995\": container with ID starting with a629e6789d262a175cfe4fddbedc94a3b883df9405dbe228fac08d74d14c4995 not found: ID does not exist" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.852946 4835 scope.go:117] "RemoveContainer" containerID="6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf" Jan 29 15:54:44 crc kubenswrapper[4835]: E0129 15:54:44.853336 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf\": container with ID starting with 6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf not found: ID does not exist" containerID="6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf" Jan 29 15:54:44 crc kubenswrapper[4835]: I0129 15:54:44.853379 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf"} err="failed to get container status \"6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf\": rpc error: code = NotFound desc = could not find container \"6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf\": container with ID starting with 6aabeed7510c2e330e859d96140984e7a875eb7590e6b9b7d525298bc225faaf not found: ID does not exist" Jan 29 15:54:46 crc kubenswrapper[4835]: E0129 15:54:46.498649 4835 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:54:46 crc kubenswrapper[4835]: I0129 15:54:46.505045 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" path="/var/lib/kubelet/pods/b115ffa4-1248-4584-bfcd-6ecfa552ef9f/volumes" Jan 29 15:54:49 crc kubenswrapper[4835]: I0129 15:54:49.839384 4835 generic.go:334] "Generic (PLEG): container finished" podID="25eb2a12-67b0-4a75-9923-9b31dec57982" containerID="aa6eae13e0634e1a486531e39af03d0f843ce795cea3a7a057c2290823b9b226" exitCode=0 Jan 29 15:54:49 crc kubenswrapper[4835]: I0129 15:54:49.839496 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z2g6x" event={"ID":"25eb2a12-67b0-4a75-9923-9b31dec57982","Type":"ContainerDied","Data":"aa6eae13e0634e1a486531e39af03d0f843ce795cea3a7a057c2290823b9b226"} Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.148275 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.149107 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.250111 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.431459 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:50 crc 
kubenswrapper[4835]: I0129 15:54:50.437448 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"swift-storage-0\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " pod="openstack/swift-storage-0" Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.510436 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 15:54:50 crc kubenswrapper[4835]: I0129 15:54:50.922902 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.146203 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.224041 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.298550 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-4dgc2"] Jan 29 15:54:51 crc kubenswrapper[4835]: E0129 15:54:51.299028 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="dnsmasq-dns" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.299053 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="dnsmasq-dns" Jan 29 15:54:51 crc kubenswrapper[4835]: E0129 15:54:51.299086 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25eb2a12-67b0-4a75-9923-9b31dec57982" containerName="swift-ring-rebalance" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.299098 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="25eb2a12-67b0-4a75-9923-9b31dec57982" containerName="swift-ring-rebalance" Jan 29 15:54:51 crc 
kubenswrapper[4835]: E0129 15:54:51.299119 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="init" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.299126 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="init" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.299344 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="25eb2a12-67b0-4a75-9923-9b31dec57982" containerName="swift-ring-rebalance" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.299361 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b115ffa4-1248-4584-bfcd-6ecfa552ef9f" containerName="dnsmasq-dns" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.300207 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.307046 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-815b-account-create-update-ffcjk"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.308281 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.318837 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.325544 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dgc2"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.336610 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-815b-account-create-update-ffcjk"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.342748 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.342791 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387532 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mq8d\" (UniqueName: \"kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387679 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387748 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: 
\"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387836 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387865 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.387950 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.388007 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf\") pod \"25eb2a12-67b0-4a75-9923-9b31dec57982\" (UID: \"25eb2a12-67b0-4a75-9923-9b31dec57982\") " Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.389951 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.390612 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.393727 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d" (OuterVolumeSpecName: "kube-api-access-8mq8d") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "kube-api-access-8mq8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.397637 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.410141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.418272 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts" (OuterVolumeSpecName: "scripts") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.418969 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25eb2a12-67b0-4a75-9923-9b31dec57982" (UID: "25eb2a12-67b0-4a75-9923-9b31dec57982"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.434297 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.489695 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qm2b\" (UniqueName: \"kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490166 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490247 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz5gp\" (UniqueName: \"kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490344 4835 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490361 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mq8d\" (UniqueName: \"kubernetes.io/projected/25eb2a12-67b0-4a75-9923-9b31dec57982-kube-api-access-8mq8d\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490374 4835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25eb2a12-67b0-4a75-9923-9b31dec57982-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490386 4835 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490399 4835 reconciler_common.go:293] "Volume 
detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490409 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25eb2a12-67b0-4a75-9923-9b31dec57982-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.490419 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25eb2a12-67b0-4a75-9923-9b31dec57982-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.501175 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wl44f"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.502170 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.510180 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wl44f"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.592149 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.592198 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qm2b\" (UniqueName: \"kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc 
kubenswrapper[4835]: I0129 15:54:51.592242 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.592305 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz5gp\" (UniqueName: \"kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.592915 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.593625 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.594917 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1390-account-create-update-5xf62"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.596327 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.604827 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1390-account-create-update-5xf62"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.605518 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.611796 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz5gp\" (UniqueName: \"kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp\") pod \"keystone-815b-account-create-update-ffcjk\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.614743 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qm2b\" (UniqueName: \"kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b\") pod \"keystone-db-create-4dgc2\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.615004 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.615067 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 15:54:51 
crc kubenswrapper[4835]: I0129 15:54:51.615118 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.615828 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.615891 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" gracePeriod=600 Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.623726 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.633978 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.693923 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg6p2\" (UniqueName: \"kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.694002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.694119 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmn7\" (UniqueName: \"kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.694218 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: E0129 15:54:51.777623 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.795527 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg6p2\" (UniqueName: \"kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.795586 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.795646 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmn7\" (UniqueName: \"kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.795729 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 
15:54:51.796822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.805660 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.822054 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg6p2\" (UniqueName: \"kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2\") pod \"placement-1390-account-create-update-5xf62\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.824416 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmn7\" (UniqueName: \"kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7\") pod \"placement-db-create-wl44f\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " pod="openstack/placement-db-create-wl44f" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.830374 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9w77s"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.831645 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9w77s" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.840360 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9w77s"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.859461 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"98b91e7cdafe41f98b42b92d53c44e5d5e8f3e01a3618edec724696da407f832"} Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.862626 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" exitCode=0 Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.862738 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085"} Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.862776 4835 scope.go:117] "RemoveContainer" containerID="4b0dd5999a96771f391d295e38b4e02038c721d5806f4dd1294274ca9a5198a3" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.864002 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:54:51 crc kubenswrapper[4835]: E0129 15:54:51.864418 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:54:51 crc 
kubenswrapper[4835]: I0129 15:54:51.874954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z2g6x" event={"ID":"25eb2a12-67b0-4a75-9923-9b31dec57982","Type":"ContainerDied","Data":"b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab"} Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.875006 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6bd19da4a9fd82938bdc7e42160c59d79f34f65bab8309055e97b793bad1eab" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.875808 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z2g6x" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.912734 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.934679 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a171-account-create-update-5vthf"] Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.936667 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.944516 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 15:54:51 crc kubenswrapper[4835]: I0129 15:54:51.971074 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a171-account-create-update-5vthf"] Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.008639 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ttb\" (UniqueName: \"kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.008729 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.022330 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.086736 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wvpc4" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:54:52 crc kubenswrapper[4835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:54:52 crc kubenswrapper[4835]: > Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.110650 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.110716 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgh4\" (UniqueName: \"kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.110769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.110850 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ttb\" (UniqueName: \"kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.111767 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.119644 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wl44f" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.128769 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ttb\" (UniqueName: \"kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb\") pod \"glance-db-create-9w77s\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.155282 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9w77s" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.212446 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgh4\" (UniqueName: \"kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.212631 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.213497 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.231873 4835 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-pmgh4\" (UniqueName: \"kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4\") pod \"glance-a171-account-create-update-5vthf\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.258101 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.273766 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-4dgc2"] Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.289781 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-815b-account-create-update-ffcjk"] Jan 29 15:54:52 crc kubenswrapper[4835]: W0129 15:54:52.291389 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e9472b2_4e26_42c0_8bf3_e3c21ba2f411.slice/crio-8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d WatchSource:0}: Error finding container 8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d: Status 404 returned error can't find the container with id 8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d Jan 29 15:54:52 crc kubenswrapper[4835]: W0129 15:54:52.296925 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2975bccb_ca08_41b3_9086_9868a7f64e8b.slice/crio-5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603 WatchSource:0}: Error finding container 5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603: Status 404 returned error can't find the container with id 5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603 Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.429287 4835 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-1390-account-create-update-5xf62"] Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.591086 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wl44f"] Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.669666 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9w77s"] Jan 29 15:54:52 crc kubenswrapper[4835]: W0129 15:54:52.695741 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55a710ef_d5fc_4912_967c_ea1ca55cc761.slice/crio-dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7 WatchSource:0}: Error finding container dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7: Status 404 returned error can't find the container with id dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7 Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.771976 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a171-account-create-update-5vthf"] Jan 29 15:54:52 crc kubenswrapper[4835]: W0129 15:54:52.787203 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa656191_3c36_4639_9e82_de113cae7c7d.slice/crio-bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c WatchSource:0}: Error finding container bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c: Status 404 returned error can't find the container with id bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.884176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1390-account-create-update-5xf62" event={"ID":"55a710ef-d5fc-4912-967c-ea1ca55cc761","Type":"ContainerStarted","Data":"dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7"} 
Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.886063 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wl44f" event={"ID":"a7478452-6b29-4016-b823-43e509045fe4","Type":"ContainerStarted","Data":"d9aa420fedfdc87d7989d96c6fbc7f1b7128caa729772e53acf8627b810610b0"} Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.887937 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dgc2" event={"ID":"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411","Type":"ContainerStarted","Data":"8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d"} Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.889018 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9w77s" event={"ID":"4635b73c-551a-4eb6-a2f0-344e968ea9a8","Type":"ContainerStarted","Data":"4168b7ff99e9e82a5aee2a94502d826afc10cbdc369af3a8f98f689456cfcdb9"} Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.891552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a171-account-create-update-5vthf" event={"ID":"aa656191-3c36-4639-9e82-de113cae7c7d","Type":"ContainerStarted","Data":"bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c"} Jan 29 15:54:52 crc kubenswrapper[4835]: I0129 15:54:52.894196 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-815b-account-create-update-ffcjk" event={"ID":"2975bccb-ca08-41b3-9086-9868a7f64e8b","Type":"ContainerStarted","Data":"5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.902640 4835 generic.go:334] "Generic (PLEG): container finished" podID="1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" containerID="b3db866f5c5c10811d36b1ba316f1d95224827373d3c1fd25a1d9ebbccd563cf" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.902812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-4dgc2" event={"ID":"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411","Type":"ContainerDied","Data":"b3db866f5c5c10811d36b1ba316f1d95224827373d3c1fd25a1d9ebbccd563cf"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.920248 4835 generic.go:334] "Generic (PLEG): container finished" podID="4635b73c-551a-4eb6-a2f0-344e968ea9a8" containerID="b517a0ac1fdb51643a31b57c32c6da8fb261ae8d833ada91622687657049b87f" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.920347 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9w77s" event={"ID":"4635b73c-551a-4eb6-a2f0-344e968ea9a8","Type":"ContainerDied","Data":"b517a0ac1fdb51643a31b57c32c6da8fb261ae8d833ada91622687657049b87f"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.926688 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"29693bd067c29bcef68d5b953a18c82bdc942785a44206abd862b321b7c77fb6"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.926728 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"f47a12bee37ff4ad1416b9e4d6055c6ceddc1d33304721e7f0bcb0d682158067"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.926743 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"7d90e6ba25c8c94653671eba222fef93b9b2446541dcca56e78ee9231c96445a"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.928510 4835 generic.go:334] "Generic (PLEG): container finished" podID="aa656191-3c36-4639-9e82-de113cae7c7d" containerID="860549624ab3eb45bfaf2a2e1971e8956ea637e261aa5d1e607638b4f557b735" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.928629 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-a171-account-create-update-5vthf" event={"ID":"aa656191-3c36-4639-9e82-de113cae7c7d","Type":"ContainerDied","Data":"860549624ab3eb45bfaf2a2e1971e8956ea637e261aa5d1e607638b4f557b735"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.931754 4835 generic.go:334] "Generic (PLEG): container finished" podID="2975bccb-ca08-41b3-9086-9868a7f64e8b" containerID="d6cacb620c0e0a6f084b5de1844f2f96598061e8275a483841c42e1012c4362f" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.931848 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-815b-account-create-update-ffcjk" event={"ID":"2975bccb-ca08-41b3-9086-9868a7f64e8b","Type":"ContainerDied","Data":"d6cacb620c0e0a6f084b5de1844f2f96598061e8275a483841c42e1012c4362f"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.940673 4835 generic.go:334] "Generic (PLEG): container finished" podID="55a710ef-d5fc-4912-967c-ea1ca55cc761" containerID="4d952b2ba38b92990ffb25b5cfacaf982a6c3071c60affccc48a62e10a9d7b68" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.940737 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1390-account-create-update-5xf62" event={"ID":"55a710ef-d5fc-4912-967c-ea1ca55cc761","Type":"ContainerDied","Data":"4d952b2ba38b92990ffb25b5cfacaf982a6c3071c60affccc48a62e10a9d7b68"} Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.942495 4835 generic.go:334] "Generic (PLEG): container finished" podID="a7478452-6b29-4016-b823-43e509045fe4" containerID="ca7ed96849faaee25f26093745e02b1969733ec2783d12f9f2c28050a39785ae" exitCode=0 Jan 29 15:54:53 crc kubenswrapper[4835]: I0129 15:54:53.942538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wl44f" event={"ID":"a7478452-6b29-4016-b823-43e509045fe4","Type":"ContainerDied","Data":"ca7ed96849faaee25f26093745e02b1969733ec2783d12f9f2c28050a39785ae"} Jan 29 15:54:54 crc 
kubenswrapper[4835]: I0129 15:54:54.952963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"ea3655dc8836f21bb4e2d129275d9c13c513c97506d8486147669096ca156531"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.350525 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.482253 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts\") pod \"55a710ef-d5fc-4912-967c-ea1ca55cc761\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.482434 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg6p2\" (UniqueName: \"kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2\") pod \"55a710ef-d5fc-4912-967c-ea1ca55cc761\" (UID: \"55a710ef-d5fc-4912-967c-ea1ca55cc761\") " Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.485115 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55a710ef-d5fc-4912-967c-ea1ca55cc761" (UID: "55a710ef-d5fc-4912-967c-ea1ca55cc761"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.492725 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2" (OuterVolumeSpecName: "kube-api-access-rg6p2") pod "55a710ef-d5fc-4912-967c-ea1ca55cc761" (UID: "55a710ef-d5fc-4912-967c-ea1ca55cc761"). InnerVolumeSpecName "kube-api-access-rg6p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.584063 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg6p2\" (UniqueName: \"kubernetes.io/projected/55a710ef-d5fc-4912-967c-ea1ca55cc761-kube-api-access-rg6p2\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.584089 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55a710ef-d5fc-4912-967c-ea1ca55cc761-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.964440 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1390-account-create-update-5xf62" event={"ID":"55a710ef-d5fc-4912-967c-ea1ca55cc761","Type":"ContainerDied","Data":"dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.964750 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa35e9bb8ab78bcb75548dfa4720488b2f6874fbebe9719e5f5fa66673e71a7" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.964454 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1390-account-create-update-5xf62" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.967931 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wl44f" event={"ID":"a7478452-6b29-4016-b823-43e509045fe4","Type":"ContainerDied","Data":"d9aa420fedfdc87d7989d96c6fbc7f1b7128caa729772e53acf8627b810610b0"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.967973 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9aa420fedfdc87d7989d96c6fbc7f1b7128caa729772e53acf8627b810610b0" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.969392 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-4dgc2" event={"ID":"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411","Type":"ContainerDied","Data":"8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.969528 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8366fd3e4056ee17c1155cb5c91624bf96f027131cd14a7d7f89483603bdf89d" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.971041 4835 generic.go:334] "Generic (PLEG): container finished" podID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerID="64169b97629fcadaf1de1bd1f7d89edfecc0da8978b11c86487715e1aaeefbfb" exitCode=0 Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.971169 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerDied","Data":"64169b97629fcadaf1de1bd1f7d89edfecc0da8978b11c86487715e1aaeefbfb"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.975862 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9w77s" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.977862 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9w77s" event={"ID":"4635b73c-551a-4eb6-a2f0-344e968ea9a8","Type":"ContainerDied","Data":"4168b7ff99e9e82a5aee2a94502d826afc10cbdc369af3a8f98f689456cfcdb9"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.978000 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4168b7ff99e9e82a5aee2a94502d826afc10cbdc369af3a8f98f689456cfcdb9" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.979530 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a171-account-create-update-5vthf" event={"ID":"aa656191-3c36-4639-9e82-de113cae7c7d","Type":"ContainerDied","Data":"bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.979572 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb992092f6c7907a5af241ea20d570c324ab09f5ee1b60aecb71c7c818a6bc2c" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.982792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-815b-account-create-update-ffcjk" event={"ID":"2975bccb-ca08-41b3-9086-9868a7f64e8b","Type":"ContainerDied","Data":"5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603"} Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.982836 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df0f5f76ab23fbeac9b6e13335ca2e112d5eed0e51e18ea5f4140c504f7c603" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.986010 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:55 crc kubenswrapper[4835]: I0129 15:54:55.995287 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.067052 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.082913 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wl44f" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095114 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmgh4\" (UniqueName: \"kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4\") pod \"aa656191-3c36-4639-9e82-de113cae7c7d\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts\") pod \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ttb\" (UniqueName: \"kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb\") pod \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qm2b\" (UniqueName: \"kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b\") pod \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\" (UID: \"1e9472b2-4e26-42c0-8bf3-e3c21ba2f411\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095395 
4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts\") pod \"aa656191-3c36-4639-9e82-de113cae7c7d\" (UID: \"aa656191-3c36-4639-9e82-de113cae7c7d\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.095482 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts\") pod \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\" (UID: \"4635b73c-551a-4eb6-a2f0-344e968ea9a8\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.098048 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" (UID: "1e9472b2-4e26-42c0-8bf3-e3c21ba2f411"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.099962 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa656191-3c36-4639-9e82-de113cae7c7d" (UID: "aa656191-3c36-4639-9e82-de113cae7c7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.102398 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4635b73c-551a-4eb6-a2f0-344e968ea9a8" (UID: "4635b73c-551a-4eb6-a2f0-344e968ea9a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.103637 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4" (OuterVolumeSpecName: "kube-api-access-pmgh4") pod "aa656191-3c36-4639-9e82-de113cae7c7d" (UID: "aa656191-3c36-4639-9e82-de113cae7c7d"). InnerVolumeSpecName "kube-api-access-pmgh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.104772 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb" (OuterVolumeSpecName: "kube-api-access-z4ttb") pod "4635b73c-551a-4eb6-a2f0-344e968ea9a8" (UID: "4635b73c-551a-4eb6-a2f0-344e968ea9a8"). InnerVolumeSpecName "kube-api-access-z4ttb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.109340 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b" (OuterVolumeSpecName: "kube-api-access-4qm2b") pod "1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" (UID: "1e9472b2-4e26-42c0-8bf3-e3c21ba2f411"). InnerVolumeSpecName "kube-api-access-4qm2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.197514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz5gp\" (UniqueName: \"kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp\") pod \"2975bccb-ca08-41b3-9086-9868a7f64e8b\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.197596 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmn7\" (UniqueName: \"kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7\") pod \"a7478452-6b29-4016-b823-43e509045fe4\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.197622 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts\") pod \"a7478452-6b29-4016-b823-43e509045fe4\" (UID: \"a7478452-6b29-4016-b823-43e509045fe4\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.197667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts\") pod \"2975bccb-ca08-41b3-9086-9868a7f64e8b\" (UID: \"2975bccb-ca08-41b3-9086-9868a7f64e8b\") " Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198049 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmgh4\" (UniqueName: \"kubernetes.io/projected/aa656191-3c36-4639-9e82-de113cae7c7d-kube-api-access-pmgh4\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198064 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198073 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ttb\" (UniqueName: \"kubernetes.io/projected/4635b73c-551a-4eb6-a2f0-344e968ea9a8-kube-api-access-z4ttb\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198084 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qm2b\" (UniqueName: \"kubernetes.io/projected/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411-kube-api-access-4qm2b\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198092 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa656191-3c36-4639-9e82-de113cae7c7d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198100 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4635b73c-551a-4eb6-a2f0-344e968ea9a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198253 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7478452-6b29-4016-b823-43e509045fe4" (UID: "a7478452-6b29-4016-b823-43e509045fe4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.198420 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2975bccb-ca08-41b3-9086-9868a7f64e8b" (UID: "2975bccb-ca08-41b3-9086-9868a7f64e8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.200770 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7" (OuterVolumeSpecName: "kube-api-access-2tmn7") pod "a7478452-6b29-4016-b823-43e509045fe4" (UID: "a7478452-6b29-4016-b823-43e509045fe4"). InnerVolumeSpecName "kube-api-access-2tmn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.201684 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp" (OuterVolumeSpecName: "kube-api-access-zz5gp") pod "2975bccb-ca08-41b3-9086-9868a7f64e8b" (UID: "2975bccb-ca08-41b3-9086-9868a7f64e8b"). InnerVolumeSpecName "kube-api-access-zz5gp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.300440 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz5gp\" (UniqueName: \"kubernetes.io/projected/2975bccb-ca08-41b3-9086-9868a7f64e8b-kube-api-access-zz5gp\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.300493 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tmn7\" (UniqueName: \"kubernetes.io/projected/a7478452-6b29-4016-b823-43e509045fe4-kube-api-access-2tmn7\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.300523 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7478452-6b29-4016-b823-43e509045fe4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.300533 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2975bccb-ca08-41b3-9086-9868a7f64e8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.993239 4835 generic.go:334] "Generic (PLEG): container finished" podID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerID="b8597f16a2219b2b8c34c3748733464d2ee66f76e90cbef7f856667a18a6d90d" exitCode=0 Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.993346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerDied","Data":"b8597f16a2219b2b8c34c3748733464d2ee66f76e90cbef7f856667a18a6d90d"} Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.996893 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerStarted","Data":"3ee09d0ab84f95e0d34e071b677b6bc38c1151190177df01369eb3c4ce330847"} Jan 29 15:54:56 crc kubenswrapper[4835]: I0129 15:54:56.997828 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.001261 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a171-account-create-update-5vthf" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.001977 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"96326193929bfae4736118da0da7aa31ca82117388834ae8a9d814377a2df442"} Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.002036 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-ffcjk" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.002413 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-4dgc2" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.002804 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9w77s" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.003145 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wl44f" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.076637 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.54394177 podStartE2EDuration="1m31.076618044s" podCreationTimestamp="2026-01-29 15:53:26 +0000 UTC" firstStartedPulling="2026-01-29 15:53:28.170591701 +0000 UTC m=+1510.363635365" lastFinishedPulling="2026-01-29 15:54:21.703267985 +0000 UTC m=+1563.896311639" observedRunningTime="2026-01-29 15:54:57.055778866 +0000 UTC m=+1599.248822540" watchObservedRunningTime="2026-01-29 15:54:57.076618044 +0000 UTC m=+1599.269661698" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.090673 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wvpc4" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" probeResult="failure" output=< Jan 29 15:54:57 crc kubenswrapper[4835]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 15:54:57 crc kubenswrapper[4835]: > Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.132383 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.156156 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.367690 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wvpc4-config-mfwd6"] Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368535 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2975bccb-ca08-41b3-9086-9868a7f64e8b" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368555 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2975bccb-ca08-41b3-9086-9868a7f64e8b" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368572 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7478452-6b29-4016-b823-43e509045fe4" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368579 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7478452-6b29-4016-b823-43e509045fe4" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368599 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368606 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368616 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55a710ef-d5fc-4912-967c-ea1ca55cc761" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368623 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a710ef-d5fc-4912-967c-ea1ca55cc761" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368636 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa656191-3c36-4639-9e82-de113cae7c7d" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368642 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa656191-3c36-4639-9e82-de113cae7c7d" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: E0129 15:54:57.368655 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4635b73c-551a-4eb6-a2f0-344e968ea9a8" containerName="mariadb-database-create" Jan 29 15:54:57 
crc kubenswrapper[4835]: I0129 15:54:57.368662 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4635b73c-551a-4eb6-a2f0-344e968ea9a8" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368848 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="55a710ef-d5fc-4912-967c-ea1ca55cc761" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368860 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7478452-6b29-4016-b823-43e509045fe4" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368879 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4635b73c-551a-4eb6-a2f0-344e968ea9a8" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368887 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa656191-3c36-4639-9e82-de113cae7c7d" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368898 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2975bccb-ca08-41b3-9086-9868a7f64e8b" containerName="mariadb-account-create-update" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.368910 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" containerName="mariadb-database-create" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.369529 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.373002 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.377064 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4-config-mfwd6"] Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445222 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445295 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445425 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445491 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: 
\"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445555 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8slgd\" (UniqueName: \"kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.445580 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8slgd\" (UniqueName: \"kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546699 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts\") pod 
\"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546955 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.546994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.547119 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.548038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: 
\"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.548338 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.549047 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.549749 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.571144 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8slgd\" (UniqueName: \"kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd\") pod \"ovn-controller-wvpc4-config-mfwd6\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:57 crc kubenswrapper[4835]: I0129 15:54:57.761973 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.011261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerStarted","Data":"928d4065c80f7caf504725e2e34addde8c39c448d114f8f283af06dd5607f106"} Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.012633 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.019732 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"f95d08291a9c199d391d69a68653c5c74f0e8e27762eb83acb0bc55e0d66b495"} Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.019765 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"8fd88a76264762f5dddc732bba7fb82a2b2f0305de000da2121400725f7c2608"} Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.019775 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"474104e42dc8f9996c3980ed13200c007cff2281dd29f3028ef1e3636cd57e4d"} Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.051626 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.952501559 podStartE2EDuration="1m32.051609089s" podCreationTimestamp="2026-01-29 15:53:26 +0000 UTC" firstStartedPulling="2026-01-29 15:53:28.880975769 +0000 UTC m=+1511.074019433" lastFinishedPulling="2026-01-29 15:54:20.980083269 +0000 UTC m=+1563.173126963" observedRunningTime="2026-01-29 15:54:58.050796309 +0000 UTC 
m=+1600.243839963" watchObservedRunningTime="2026-01-29 15:54:58.051609089 +0000 UTC m=+1600.244652743" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.243080 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4-config-mfwd6"] Jan 29 15:54:58 crc kubenswrapper[4835]: W0129 15:54:58.250676 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d02a592_823e_4af0_91ff_c570915317eb.slice/crio-5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0 WatchSource:0}: Error finding container 5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0: Status 404 returned error can't find the container with id 5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0 Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.442901 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bz42w"] Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.444008 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.446915 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.450258 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bz42w"] Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.460147 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts\") pod \"root-account-create-update-bz42w\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.460218 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thml2\" (UniqueName: \"kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2\") pod \"root-account-create-update-bz42w\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.561870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thml2\" (UniqueName: \"kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2\") pod \"root-account-create-update-bz42w\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.562088 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts\") pod \"root-account-create-update-bz42w\" (UID: 
\"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.563793 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts\") pod \"root-account-create-update-bz42w\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.595596 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thml2\" (UniqueName: \"kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2\") pod \"root-account-create-update-bz42w\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:58 crc kubenswrapper[4835]: I0129 15:54:58.774447 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bz42w" Jan 29 15:54:59 crc kubenswrapper[4835]: I0129 15:54:59.040859 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"cb6a17d1b963b37fe45bdd33c7dcb4010669d718352d5ad5ac97fb197646084b"} Jan 29 15:54:59 crc kubenswrapper[4835]: I0129 15:54:59.042814 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d02a592-823e-4af0-91ff-c570915317eb" containerID="7feda011de9a93ccfd38692539f0990c15e4ad836ec77c3c23cafc75462556a8" exitCode=0 Jan 29 15:54:59 crc kubenswrapper[4835]: I0129 15:54:59.042882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-mfwd6" event={"ID":"0d02a592-823e-4af0-91ff-c570915317eb","Type":"ContainerDied","Data":"7feda011de9a93ccfd38692539f0990c15e4ad836ec77c3c23cafc75462556a8"} Jan 29 15:54:59 crc kubenswrapper[4835]: I0129 15:54:59.042941 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-mfwd6" event={"ID":"0d02a592-823e-4af0-91ff-c570915317eb","Type":"ContainerStarted","Data":"5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0"} Jan 29 15:54:59 crc kubenswrapper[4835]: I0129 15:54:59.257050 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bz42w"] Jan 29 15:54:59 crc kubenswrapper[4835]: W0129 15:54:59.275980 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c67d4e_2268_4c38_aba0_a4d73e311f9b.slice/crio-291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b WatchSource:0}: Error finding container 291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b: Status 404 returned error can't find the container with id 291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b Jan 29 15:54:59 crc 
kubenswrapper[4835]: I0129 15:54:59.507411 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 15:54:59 crc kubenswrapper[4835]: E0129 15:54:59.790272 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c67d4e_2268_4c38_aba0_a4d73e311f9b.slice/crio-conmon-6e3e934cc0998fe138c66e7a286f17dfcf35ba6c9c1ae47bd90391a8d2a3505d.scope\": RecentStats: unable to find data in memory cache]" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.055036 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"497362da3cdc7b0ac0586533a3f77b36a12f4d7072bc9dcc5da2221d2561e5d3"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.056077 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"7ca1893c15edffa73e896d53f7933f0f6d17a150336d46d4a2fa86bcaf2cfdab"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.056163 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"0e9a70d0d3e573c08f1f05fea5e81d31b394892a0871babe4ea8df23e129049a"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.056230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"f3c885696e6da2f898a5b8c49181385ab9235ba5ae191d51cf760e67509c466e"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.056301 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"ad890ccba45d4c845dc515bcfedc4d4060e8c99cb10da916ae788e373d48865b"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.056395 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerStarted","Data":"1cb7a2f57c5116dd63939c6513e77a00a69a8410f016136f8f3113c4eaefb685"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.058089 4835 generic.go:334] "Generic (PLEG): container finished" podID="25c67d4e-2268-4c38-aba0-a4d73e311f9b" containerID="6e3e934cc0998fe138c66e7a286f17dfcf35ba6c9c1ae47bd90391a8d2a3505d" exitCode=0 Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.058249 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bz42w" event={"ID":"25c67d4e-2268-4c38-aba0-a4d73e311f9b","Type":"ContainerDied","Data":"6e3e934cc0998fe138c66e7a286f17dfcf35ba6c9c1ae47bd90391a8d2a3505d"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.058371 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bz42w" event={"ID":"25c67d4e-2268-4c38-aba0-a4d73e311f9b","Type":"ContainerStarted","Data":"291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b"} Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.100763 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.530344937 podStartE2EDuration="27.100738255s" podCreationTimestamp="2026-01-29 15:54:33 +0000 UTC" firstStartedPulling="2026-01-29 15:54:51.160730082 +0000 UTC m=+1593.353773736" lastFinishedPulling="2026-01-29 15:54:58.7311234 +0000 UTC m=+1600.924167054" observedRunningTime="2026-01-29 15:55:00.090125971 +0000 UTC m=+1602.283169635" watchObservedRunningTime="2026-01-29 15:55:00.100738255 +0000 UTC m=+1602.293781919" Jan 29 15:55:00 crc kubenswrapper[4835]: 
I0129 15:55:00.392808 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.394230 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.396541 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.411335 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.441858 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:55:00 crc kubenswrapper[4835]: E0129 15:55:00.493124 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510568 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " 
pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510705 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kncp\" (UniqueName: \"kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.510859 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.611845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: 
\"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.611908 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.611928 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.611992 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612023 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8slgd\" (UniqueName: \"kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd\") pod \"0d02a592-823e-4af0-91ff-c570915317eb\" (UID: \"0d02a592-823e-4af0-91ff-c570915317eb\") " Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612337 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612372 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kncp\" (UniqueName: \"kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612405 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612557 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612583 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.612658 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.613527 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.614403 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts" (OuterVolumeSpecName: "scripts") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.614437 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run" (OuterVolumeSpecName: "var-run") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.614456 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.614489 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.614693 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.616737 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.616871 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.616939 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: 
\"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.617228 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.620062 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd" (OuterVolumeSpecName: "kube-api-access-8slgd") pod "0d02a592-823e-4af0-91ff-c570915317eb" (UID: "0d02a592-823e-4af0-91ff-c570915317eb"). InnerVolumeSpecName "kube-api-access-8slgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.650939 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kncp\" (UniqueName: \"kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp\") pod \"dnsmasq-dns-75bdffd66f-h8pdd\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714411 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714447 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714456 4835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714484 4835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0d02a592-823e-4af0-91ff-c570915317eb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714497 4835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0d02a592-823e-4af0-91ff-c570915317eb-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.714511 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8slgd\" (UniqueName: \"kubernetes.io/projected/0d02a592-823e-4af0-91ff-c570915317eb-kube-api-access-8slgd\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:00 crc kubenswrapper[4835]: I0129 15:55:00.756658 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.068504 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-mfwd6" event={"ID":"0d02a592-823e-4af0-91ff-c570915317eb","Type":"ContainerDied","Data":"5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0"} Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.068556 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5296e2de63fc7556f8bd01ed22ab951555eaf0c3c6aa27b8d6732798f62deeb0" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.068618 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-mfwd6" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.207921 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:55:01 crc kubenswrapper[4835]: W0129 15:55:01.214337 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod072352c5_f0e2_4c07_8501_e118357bcd7e.slice/crio-6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1 WatchSource:0}: Error finding container 6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1: Status 404 returned error can't find the container with id 6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1 Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.411754 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bz42w" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.526358 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts\") pod \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.526549 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thml2\" (UniqueName: \"kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2\") pod \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\" (UID: \"25c67d4e-2268-4c38-aba0-a4d73e311f9b\") " Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.527020 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"25c67d4e-2268-4c38-aba0-a4d73e311f9b" (UID: "25c67d4e-2268-4c38-aba0-a4d73e311f9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.542671 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2" (OuterVolumeSpecName: "kube-api-access-thml2") pod "25c67d4e-2268-4c38-aba0-a4d73e311f9b" (UID: "25c67d4e-2268-4c38-aba0-a4d73e311f9b"). InnerVolumeSpecName "kube-api-access-thml2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.543256 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wvpc4-config-mfwd6"] Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.550891 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wvpc4-config-mfwd6"] Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.629702 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c67d4e-2268-4c38-aba0-a4d73e311f9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.629730 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thml2\" (UniqueName: \"kubernetes.io/projected/25c67d4e-2268-4c38-aba0-a4d73e311f9b-kube-api-access-thml2\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.653768 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wvpc4-config-wwkbc"] Jan 29 15:55:01 crc kubenswrapper[4835]: E0129 15:55:01.654170 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d02a592-823e-4af0-91ff-c570915317eb" containerName="ovn-config" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.654188 4835 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="0d02a592-823e-4af0-91ff-c570915317eb" containerName="ovn-config" Jan 29 15:55:01 crc kubenswrapper[4835]: E0129 15:55:01.654209 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c67d4e-2268-4c38-aba0-a4d73e311f9b" containerName="mariadb-account-create-update" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.654215 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c67d4e-2268-4c38-aba0-a4d73e311f9b" containerName="mariadb-account-create-update" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.654387 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c67d4e-2268-4c38-aba0-a4d73e311f9b" containerName="mariadb-account-create-update" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.654399 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d02a592-823e-4af0-91ff-c570915317eb" containerName="ovn-config" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.654992 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.657109 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.674237 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4-config-wwkbc"] Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.832323 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.832839 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64njp\" (UniqueName: \"kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.832868 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.832888 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: 
\"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.833084 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.833322 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935456 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935566 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935598 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts\") pod 
\"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935675 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64njp\" (UniqueName: \"kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935742 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.935976 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.936065 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: 
\"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.936085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.936329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.937738 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.962321 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64njp\" (UniqueName: \"kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp\") pod \"ovn-controller-wvpc4-config-wwkbc\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:01 crc kubenswrapper[4835]: I0129 15:55:01.971294 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.083679 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gnnnq"] Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.088725 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.092198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.092254 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h6f4p" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.099457 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnnnq"] Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.105639 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wvpc4" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.119403 4835 generic.go:334] "Generic (PLEG): container finished" podID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerID="d486c746133c4c2e4445f045e011e1493dcb98380c30e9f30c9d4e8121cf946a" exitCode=0 Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.119532 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" event={"ID":"072352c5-f0e2-4c07-8501-e118357bcd7e","Type":"ContainerDied","Data":"d486c746133c4c2e4445f045e011e1493dcb98380c30e9f30c9d4e8121cf946a"} Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.119558 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" event={"ID":"072352c5-f0e2-4c07-8501-e118357bcd7e","Type":"ContainerStarted","Data":"6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1"} Jan 29 15:55:02 crc kubenswrapper[4835]: 
I0129 15:55:02.125555 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bz42w" event={"ID":"25c67d4e-2268-4c38-aba0-a4d73e311f9b","Type":"ContainerDied","Data":"291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b"} Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.125778 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="291633c1588e395be033380f60087e8a54aea3d643a95c64dfb2c46b09211c0b" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.125821 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bz42w" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.240169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.240297 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtsgq\" (UniqueName: \"kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.240321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.240377 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.341521 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtsgq\" (UniqueName: \"kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.341570 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.341610 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.341701 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.345916 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.347904 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.364945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.365067 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtsgq\" (UniqueName: \"kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq\") pod \"glance-db-sync-gnnnq\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.413576 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnnnq" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.477151 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wvpc4-config-wwkbc"] Jan 29 15:55:02 crc kubenswrapper[4835]: W0129 15:55:02.480287 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda69d1a4c_ae0a_4859_a800_3db10e7280d5.slice/crio-1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b WatchSource:0}: Error finding container 1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b: Status 404 returned error can't find the container with id 1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.504553 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d02a592-823e-4af0-91ff-c570915317eb" path="/var/lib/kubelet/pods/0d02a592-823e-4af0-91ff-c570915317eb/volumes" Jan 29 15:55:02 crc kubenswrapper[4835]: I0129 15:55:02.958909 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gnnnq"] Jan 29 15:55:02 crc kubenswrapper[4835]: W0129 15:55:02.963724 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0830b0ce_038c_41ff_bed9_e4efa49ae35f.slice/crio-07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346 WatchSource:0}: Error finding container 07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346: Status 404 returned error can't find the container with id 07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346 Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.134759 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-wwkbc" 
event={"ID":"a69d1a4c-ae0a-4859-a800-3db10e7280d5","Type":"ContainerStarted","Data":"5740c3d748b1ad4713421af69e666a45167c69ae6198cdd359e44823932def4c"} Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.135202 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-wwkbc" event={"ID":"a69d1a4c-ae0a-4859-a800-3db10e7280d5","Type":"ContainerStarted","Data":"1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b"} Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.135973 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnnnq" event={"ID":"0830b0ce-038c-41ff-bed9-e4efa49ae35f","Type":"ContainerStarted","Data":"07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346"} Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.138111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" event={"ID":"072352c5-f0e2-4c07-8501-e118357bcd7e","Type":"ContainerStarted","Data":"9af6db8002bb1c810886d469801671396039c3d9f5118f544aa527d7141d7806"} Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.138323 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.161075 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podStartSLOduration=3.161055035 podStartE2EDuration="3.161055035s" podCreationTimestamp="2026-01-29 15:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:03.158620355 +0000 UTC m=+1605.351664009" watchObservedRunningTime="2026-01-29 15:55:03.161055035 +0000 UTC m=+1605.354098699" Jan 29 15:55:03 crc kubenswrapper[4835]: I0129 15:55:03.492636 4835 scope.go:117] "RemoveContainer" 
containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:55:03 crc kubenswrapper[4835]: E0129 15:55:03.492906 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:55:04 crc kubenswrapper[4835]: I0129 15:55:04.148184 4835 generic.go:334] "Generic (PLEG): container finished" podID="a69d1a4c-ae0a-4859-a800-3db10e7280d5" containerID="5740c3d748b1ad4713421af69e666a45167c69ae6198cdd359e44823932def4c" exitCode=0 Jan 29 15:55:04 crc kubenswrapper[4835]: I0129 15:55:04.149339 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-wwkbc" event={"ID":"a69d1a4c-ae0a-4859-a800-3db10e7280d5","Type":"ContainerDied","Data":"5740c3d748b1ad4713421af69e666a45167c69ae6198cdd359e44823932def4c"} Jan 29 15:55:04 crc kubenswrapper[4835]: I0129 15:55:04.973712 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bz42w"] Jan 29 15:55:04 crc kubenswrapper[4835]: I0129 15:55:04.982463 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bz42w"] Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.460428 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.594526 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595003 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64njp\" (UniqueName: \"kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595054 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595076 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595110 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595198 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn\") pod \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\" (UID: \"a69d1a4c-ae0a-4859-a800-3db10e7280d5\") " Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595251 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run" (OuterVolumeSpecName: "var-run") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595361 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595632 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595644 4835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595654 4835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a69d1a4c-ae0a-4859-a800-3db10e7280d5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.595921 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.596374 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts" (OuterVolumeSpecName: "scripts") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.613837 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp" (OuterVolumeSpecName: "kube-api-access-64njp") pod "a69d1a4c-ae0a-4859-a800-3db10e7280d5" (UID: "a69d1a4c-ae0a-4859-a800-3db10e7280d5"). InnerVolumeSpecName "kube-api-access-64njp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.697097 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64njp\" (UniqueName: \"kubernetes.io/projected/a69d1a4c-ae0a-4859-a800-3db10e7280d5-kube-api-access-64njp\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.697124 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:05.697132 4835 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a69d1a4c-ae0a-4859-a800-3db10e7280d5-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.187228 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4-config-wwkbc" event={"ID":"a69d1a4c-ae0a-4859-a800-3db10e7280d5","Type":"ContainerDied","Data":"1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b"} Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.187291 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wvpc4-config-wwkbc" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.187310 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c7adab274115abfb63ac990256014f2db836087103603d6caa0743cdc34381b" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.512459 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c67d4e-2268-4c38-aba0-a4d73e311f9b" path="/var/lib/kubelet/pods/25c67d4e-2268-4c38-aba0-a4d73e311f9b/volumes" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.553950 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wvpc4-config-wwkbc"] Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:06.566031 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wvpc4-config-wwkbc"] Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:07.660102 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:08.281300 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:08.506134 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69d1a4c-ae0a-4859-a800-3db10e7280d5" path="/var/lib/kubelet/pods/a69d1a4c-ae0a-4859-a800-3db10e7280d5/volumes" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.983084 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dlwrd"] Jan 29 15:55:09 crc kubenswrapper[4835]: E0129 15:55:09.983955 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69d1a4c-ae0a-4859-a800-3db10e7280d5" containerName="ovn-config" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.984124 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69d1a4c-ae0a-4859-a800-3db10e7280d5" containerName="ovn-config" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.984749 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69d1a4c-ae0a-4859-a800-3db10e7280d5" containerName="ovn-config" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.986542 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.989350 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 15:55:09 crc kubenswrapper[4835]: I0129 15:55:09.994803 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dlwrd"] Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.076983 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.077157 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nkj\" (UniqueName: \"kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.179620 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.179681 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8nkj\" (UniqueName: \"kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.180514 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.198927 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8nkj\" (UniqueName: \"kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj\") pod \"root-account-create-update-dlwrd\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.314385 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.758395 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.814260 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:55:10 crc kubenswrapper[4835]: I0129 15:55:10.814781 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="dnsmasq-dns" containerID="cri-o://d3aa460fbb5daecf35c017919cb7b5ef5ad94eb778a851f73bfaba821cdf84b4" gracePeriod=10 Jan 29 15:55:12 crc kubenswrapper[4835]: I0129 15:55:12.241009 4835 generic.go:334] "Generic (PLEG): container finished" podID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerID="d3aa460fbb5daecf35c017919cb7b5ef5ad94eb778a851f73bfaba821cdf84b4" exitCode=0 Jan 29 15:55:12 crc kubenswrapper[4835]: I0129 15:55:12.241058 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" event={"ID":"58b6d24b-00d3-4fb3-921e-9b2671776618","Type":"ContainerDied","Data":"d3aa460fbb5daecf35c017919cb7b5ef5ad94eb778a851f73bfaba821cdf84b4"} Jan 29 15:55:13 crc kubenswrapper[4835]: I0129 15:55:13.769403 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.116:5353: connect: connection refused" Jan 29 15:55:16 crc kubenswrapper[4835]: E0129 15:55:16.086553 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:55:16 crc kubenswrapper[4835]: I0129 15:55:16.555069 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dlwrd"] Jan 29 15:55:17 crc kubenswrapper[4835]: I0129 15:55:17.286044 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dlwrd" event={"ID":"3c4899ae-0dd0-4a8d-aeb2-65738205364c","Type":"ContainerStarted","Data":"36302bf385dcddb89e66514e052620a7b9dd589ea5da6235352101cae5a20b2f"} Jan 29 15:55:17 crc kubenswrapper[4835]: I0129 15:55:17.492561 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:55:17 crc kubenswrapper[4835]: E0129 15:55:17.493256 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:55:17 crc kubenswrapper[4835]: I0129 15:55:17.658769 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 15:55:17 crc kubenswrapper[4835]: I0129 15:55:17.989110 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-d9gpw"] Jan 29 15:55:17 crc kubenswrapper[4835]: I0129 15:55:17.991291 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.008337 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d9gpw"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.122337 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hbk\" (UniqueName: \"kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.122648 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.208569 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-919a-account-create-update-gk65v"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.209920 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.212322 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.225979 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-919a-account-create-update-gk65v"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.228016 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.228080 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5hbk\" (UniqueName: \"kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.231057 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.253853 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5hbk\" (UniqueName: \"kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk\") pod \"barbican-db-create-d9gpw\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc 
kubenswrapper[4835]: I0129 15:55:18.288713 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.292697 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ggv7m"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.293777 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.318349 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7580-account-create-update-thn8w"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.319428 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.325996 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.327975 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ggv7m"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.331402 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnrg\" (UniqueName: \"kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg\") pod \"barbican-919a-account-create-update-gk65v\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.331560 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts\") pod \"barbican-919a-account-create-update-gk65v\" (UID: 
\"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.337879 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7580-account-create-update-thn8w"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.351428 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433407 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433478 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52smt\" (UniqueName: \"kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433499 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxg7\" (UniqueName: \"kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7\") pod \"cinder-db-create-ggv7m\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433530 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts\") pod \"cinder-db-create-ggv7m\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmnrg\" (UniqueName: \"kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg\") pod \"barbican-919a-account-create-update-gk65v\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.433728 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts\") pod \"barbican-919a-account-create-update-gk65v\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.436421 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts\") pod \"barbican-919a-account-create-update-gk65v\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.465544 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmnrg\" (UniqueName: \"kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg\") pod \"barbican-919a-account-create-update-gk65v\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.479524 4835 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-db-sync-92dgb"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.487619 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-92dgb"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.488640 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.496014 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.496088 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.496297 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.496535 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxxtt" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.499683 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.522354 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5e45-account-create-update-6dnpd"] Jan 29 15:55:18 crc kubenswrapper[4835]: E0129 15:55:18.522785 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="init" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.522806 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="init" Jan 29 15:55:18 crc kubenswrapper[4835]: E0129 15:55:18.522841 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="dnsmasq-dns" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.522848 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="dnsmasq-dns" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.523009 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" containerName="dnsmasq-dns" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.523634 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.528405 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.534499 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5e45-account-create-update-6dnpd"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542236 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542332 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542375 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52smt\" (UniqueName: \"kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542411 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxg7\" (UniqueName: \"kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7\") pod \"cinder-db-create-ggv7m\" (UID: 
\"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts\") pod \"cinder-db-create-ggv7m\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542494 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.542539 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gqp\" (UniqueName: \"kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.543578 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts\") pod \"cinder-db-create-ggv7m\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.544592 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " 
pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.550364 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.573540 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52smt\" (UniqueName: \"kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt\") pod \"cinder-7580-account-create-update-thn8w\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.598437 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxg7\" (UniqueName: \"kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7\") pod \"cinder-db-create-ggv7m\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.604585 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2n9r8"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.606417 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.635327 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2n9r8"] Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.645904 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc\") pod \"58b6d24b-00d3-4fb3-921e-9b2671776618\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.645989 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb\") pod \"58b6d24b-00d3-4fb3-921e-9b2671776618\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.646015 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb\") pod \"58b6d24b-00d3-4fb3-921e-9b2671776618\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.646061 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqjrf\" (UniqueName: \"kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf\") pod \"58b6d24b-00d3-4fb3-921e-9b2671776618\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.646100 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config\") pod \"58b6d24b-00d3-4fb3-921e-9b2671776618\" (UID: \"58b6d24b-00d3-4fb3-921e-9b2671776618\") " Jan 29 15:55:18 crc 
kubenswrapper[4835]: I0129 15:55:18.646874 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.646975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gqp\" (UniqueName: \"kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.647138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67r8j\" (UniqueName: \"kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.647175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.647233 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " 
pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.656854 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.661932 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.662662 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf" (OuterVolumeSpecName: "kube-api-access-jqjrf") pod "58b6d24b-00d3-4fb3-921e-9b2671776618" (UID: "58b6d24b-00d3-4fb3-921e-9b2671776618"). InnerVolumeSpecName "kube-api-access-jqjrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.675295 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gqp\" (UniqueName: \"kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp\") pod \"keystone-db-sync-92dgb\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.696105 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "58b6d24b-00d3-4fb3-921e-9b2671776618" (UID: "58b6d24b-00d3-4fb3-921e-9b2671776618"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.704952 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "58b6d24b-00d3-4fb3-921e-9b2671776618" (UID: "58b6d24b-00d3-4fb3-921e-9b2671776618"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.714126 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "58b6d24b-00d3-4fb3-921e-9b2671776618" (UID: "58b6d24b-00d3-4fb3-921e-9b2671776618"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.731994 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config" (OuterVolumeSpecName: "config") pod "58b6d24b-00d3-4fb3-921e-9b2671776618" (UID: "58b6d24b-00d3-4fb3-921e-9b2671776618"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749234 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749374 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxcw\" (UniqueName: \"kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749426 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67r8j\" (UniqueName: \"kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749454 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749542 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqjrf\" (UniqueName: \"kubernetes.io/projected/58b6d24b-00d3-4fb3-921e-9b2671776618-kube-api-access-jqjrf\") on node \"crc\" DevicePath \"\"" Jan 
29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749560 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749571 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749584 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.749595 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/58b6d24b-00d3-4fb3-921e-9b2671776618-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.754049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.770272 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67r8j\" (UniqueName: \"kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j\") pod \"neutron-5e45-account-create-update-6dnpd\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.784602 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.798368 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.823651 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.846405 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.851487 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.851665 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnxcw\" (UniqueName: \"kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.852526 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.870522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnxcw\" (UniqueName: 
\"kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw\") pod \"neutron-db-create-2n9r8\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:18 crc kubenswrapper[4835]: I0129 15:55:18.941990 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:19 crc kubenswrapper[4835]: E0129 15:55:19.061215 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Jan 29 15:55:19 crc kubenswrapper[4835]: E0129 15:55:19.061410 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtsgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gnnnq_openstack(0830b0ce-038c-41ff-bed9-e4efa49ae35f): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 29 15:55:19 crc kubenswrapper[4835]: E0129 15:55:19.062678 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gnnnq" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.069670 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-d9gpw"] Jan 29 15:55:19 crc kubenswrapper[4835]: W0129 15:55:19.069805 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9f8e0f7_ce6b_4ffb_b0ee_f7f2654f1597.slice/crio-0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f WatchSource:0}: Error finding container 0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f: Status 404 returned error can't find the container with id 0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.176753 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-919a-account-create-update-gk65v"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.329685 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-919a-account-create-update-gk65v" event={"ID":"d90603cb-bccf-4838-b077-7d1fe9da2c34","Type":"ContainerStarted","Data":"091ebb819a2c7ce0b9829be3e922ab3103a195b5e6eb961acfbdff28905da76e"} Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.332144 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dlwrd" event={"ID":"3c4899ae-0dd0-4a8d-aeb2-65738205364c","Type":"ContainerStarted","Data":"7d8975a8c98e17294ce3724285a6446a95de9bde7d078b39bc89663bc23e5ba2"} Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.343675 
4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" event={"ID":"58b6d24b-00d3-4fb3-921e-9b2671776618","Type":"ContainerDied","Data":"b257d581af23587e28cba2c0d53269ab8b5c87046fb8f9d3e85fdd7fa74356de"} Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.343727 4835 scope.go:117] "RemoveContainer" containerID="d3aa460fbb5daecf35c017919cb7b5ef5ad94eb778a851f73bfaba821cdf84b4" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.343865 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-vkwvp" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.345078 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ggv7m"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.349208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d9gpw" event={"ID":"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597","Type":"ContainerStarted","Data":"ebe0c4565fdfee27e2560d8636f68ace9e647fb92abff69869e72b260eefab0c"} Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.349239 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d9gpw" event={"ID":"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597","Type":"ContainerStarted","Data":"0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f"} Jan 29 15:55:19 crc kubenswrapper[4835]: E0129 15:55:19.351635 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-gnnnq" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.352621 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-dlwrd" podStartSLOduration=10.352603777 podStartE2EDuration="10.352603777s" podCreationTimestamp="2026-01-29 15:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:19.352139756 +0000 UTC m=+1621.545183410" watchObservedRunningTime="2026-01-29 15:55:19.352603777 +0000 UTC m=+1621.545647431" Jan 29 15:55:19 crc kubenswrapper[4835]: W0129 15:55:19.355141 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd9f8da0_19c5_4816_ae04_1981898172d7.slice/crio-4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224 WatchSource:0}: Error finding container 4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224: Status 404 returned error can't find the container with id 4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224 Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.388086 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-d9gpw" podStartSLOduration=2.388068225 podStartE2EDuration="2.388068225s" podCreationTimestamp="2026-01-29 15:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:19.37865976 +0000 UTC m=+1621.571703414" watchObservedRunningTime="2026-01-29 15:55:19.388068225 +0000 UTC m=+1621.581111879" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.436705 4835 scope.go:117] "RemoveContainer" containerID="4e26dcc70d3859384de4d1f2e60bb8c3f74f3e5b77b0f88815de6a1ab09bb8ac" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.467344 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2n9r8"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.475490 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-7580-account-create-update-thn8w"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.481444 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-92dgb"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.488045 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5e45-account-create-update-6dnpd"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.494851 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.502033 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-vkwvp"] Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.514842 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 15:55:19 crc kubenswrapper[4835]: I0129 15:55:19.515015 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.357533 4835 generic.go:334] "Generic (PLEG): container finished" podID="fd9f8da0-19c5-4816-ae04-1981898172d7" containerID="5b824f835f44b904172c32ef10bf8ae6308d9fe5e2cf55462bad7bab8275707e" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.357582 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggv7m" event={"ID":"fd9f8da0-19c5-4816-ae04-1981898172d7","Type":"ContainerDied","Data":"5b824f835f44b904172c32ef10bf8ae6308d9fe5e2cf55462bad7bab8275707e"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.357907 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggv7m" event={"ID":"fd9f8da0-19c5-4816-ae04-1981898172d7","Type":"ContainerStarted","Data":"4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.366726 4835 generic.go:334] 
"Generic (PLEG): container finished" podID="dd952bf2-34ba-40e4-af51-2fde882bb593" containerID="4a49019f8701fc1b8ccdd0887728f4002467e51649b66e329951a91f49ff49fb" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.366792 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e45-account-create-update-6dnpd" event={"ID":"dd952bf2-34ba-40e4-af51-2fde882bb593","Type":"ContainerDied","Data":"4a49019f8701fc1b8ccdd0887728f4002467e51649b66e329951a91f49ff49fb"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.366820 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e45-account-create-update-6dnpd" event={"ID":"dd952bf2-34ba-40e4-af51-2fde882bb593","Type":"ContainerStarted","Data":"6d7e4227fc3387396cf9121d1ac5c6629386877f588a618fc8861944d4139dd8"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.369884 4835 generic.go:334] "Generic (PLEG): container finished" podID="c73e7878-399e-4442-9e36-a9653b4ed8c0" containerID="17e650317719ce83a02693de52e93c1a1fc2141429b8b576788e7372d5c0ce1c" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.370074 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7580-account-create-update-thn8w" event={"ID":"c73e7878-399e-4442-9e36-a9653b4ed8c0","Type":"ContainerDied","Data":"17e650317719ce83a02693de52e93c1a1fc2141429b8b576788e7372d5c0ce1c"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.370153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7580-account-create-update-thn8w" event={"ID":"c73e7878-399e-4442-9e36-a9653b4ed8c0","Type":"ContainerStarted","Data":"ffb83b4454246ac5b79053e732160b1cb11a9752c0be5cc7fbf2813558442d5d"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.372017 4835 generic.go:334] "Generic (PLEG): container finished" podID="d90603cb-bccf-4838-b077-7d1fe9da2c34" containerID="7eb18ea653f478a81bed97600ff9f912bbdefc0f8d27ac1b2ef97fd4acd10fe6" exitCode=0 Jan 29 15:55:20 
crc kubenswrapper[4835]: I0129 15:55:20.372104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-919a-account-create-update-gk65v" event={"ID":"d90603cb-bccf-4838-b077-7d1fe9da2c34","Type":"ContainerDied","Data":"7eb18ea653f478a81bed97600ff9f912bbdefc0f8d27ac1b2ef97fd4acd10fe6"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.374186 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c4899ae-0dd0-4a8d-aeb2-65738205364c" containerID="7d8975a8c98e17294ce3724285a6446a95de9bde7d078b39bc89663bc23e5ba2" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.374245 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dlwrd" event={"ID":"3c4899ae-0dd0-4a8d-aeb2-65738205364c","Type":"ContainerDied","Data":"7d8975a8c98e17294ce3724285a6446a95de9bde7d078b39bc89663bc23e5ba2"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.375825 4835 generic.go:334] "Generic (PLEG): container finished" podID="61fd86b9-312a-4054-a02c-6c27e4909035" containerID="193333d858401d060c1e4aad514d17075124f018f66f8e855a868c9ff97dddf0" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.375924 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n9r8" event={"ID":"61fd86b9-312a-4054-a02c-6c27e4909035","Type":"ContainerDied","Data":"193333d858401d060c1e4aad514d17075124f018f66f8e855a868c9ff97dddf0"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.375953 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n9r8" event={"ID":"61fd86b9-312a-4054-a02c-6c27e4909035","Type":"ContainerStarted","Data":"d628f05cc948898eea601a51bca65461756b449e13fbb387017e7517221c468c"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.376976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-92dgb" 
event={"ID":"ff472c74-7674-4064-a6f3-8a1cd21f8d9a","Type":"ContainerStarted","Data":"1d41beba2608c52b32a288a514418edd4ca50e34c128ee197bae870ed3c860d6"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.378194 4835 generic.go:334] "Generic (PLEG): container finished" podID="f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" containerID="ebe0c4565fdfee27e2560d8636f68ace9e647fb92abff69869e72b260eefab0c" exitCode=0 Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.378225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d9gpw" event={"ID":"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597","Type":"ContainerDied","Data":"ebe0c4565fdfee27e2560d8636f68ace9e647fb92abff69869e72b260eefab0c"} Jan 29 15:55:20 crc kubenswrapper[4835]: I0129 15:55:20.502983 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58b6d24b-00d3-4fb3-921e-9b2671776618" path="/var/lib/kubelet/pods/58b6d24b-00d3-4fb3-921e-9b2671776618/volumes" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.365817 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.426081 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-919a-account-create-update-gk65v" event={"ID":"d90603cb-bccf-4838-b077-7d1fe9da2c34","Type":"ContainerDied","Data":"091ebb819a2c7ce0b9829be3e922ab3103a195b5e6eb961acfbdff28905da76e"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.426122 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="091ebb819a2c7ce0b9829be3e922ab3103a195b5e6eb961acfbdff28905da76e" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.428234 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dlwrd" event={"ID":"3c4899ae-0dd0-4a8d-aeb2-65738205364c","Type":"ContainerDied","Data":"36302bf385dcddb89e66514e052620a7b9dd589ea5da6235352101cae5a20b2f"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.428257 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36302bf385dcddb89e66514e052620a7b9dd589ea5da6235352101cae5a20b2f" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.430215 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-d9gpw" event={"ID":"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597","Type":"ContainerDied","Data":"0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.430254 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d50e586f405c6e2c3767a5e6a22e8ebd3405c10a8c9eb9e9a6acbcd8ba20f3f" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.431841 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7580-account-create-update-thn8w" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.431869 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7580-account-create-update-thn8w" event={"ID":"c73e7878-399e-4442-9e36-a9653b4ed8c0","Type":"ContainerDied","Data":"ffb83b4454246ac5b79053e732160b1cb11a9752c0be5cc7fbf2813558442d5d"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.431925 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffb83b4454246ac5b79053e732160b1cb11a9752c0be5cc7fbf2813558442d5d" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.433502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2n9r8" event={"ID":"61fd86b9-312a-4054-a02c-6c27e4909035","Type":"ContainerDied","Data":"d628f05cc948898eea601a51bca65461756b449e13fbb387017e7517221c468c"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.433525 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d628f05cc948898eea601a51bca65461756b449e13fbb387017e7517221c468c" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.435238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e45-account-create-update-6dnpd" event={"ID":"dd952bf2-34ba-40e4-af51-2fde882bb593","Type":"ContainerDied","Data":"6d7e4227fc3387396cf9121d1ac5c6629386877f588a618fc8861944d4139dd8"} Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.435283 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d7e4227fc3387396cf9121d1ac5c6629386877f588a618fc8861944d4139dd8" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.436617 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ggv7m" event={"ID":"fd9f8da0-19c5-4816-ae04-1981898172d7","Type":"ContainerDied","Data":"4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224"} Jan 
29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.436641 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1c2345fb63863ead5792bcba53d2d64b57b4bd6b51fbca38d366cf8224b224" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.483535 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.537573 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.551773 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52smt\" (UniqueName: \"kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt\") pod \"c73e7878-399e-4442-9e36-a9653b4ed8c0\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.551936 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts\") pod \"c73e7878-399e-4442-9e36-a9653b4ed8c0\" (UID: \"c73e7878-399e-4442-9e36-a9653b4ed8c0\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.554119 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c73e7878-399e-4442-9e36-a9653b4ed8c0" (UID: "c73e7878-399e-4442-9e36-a9653b4ed8c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.560040 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt" (OuterVolumeSpecName: "kube-api-access-52smt") pod "c73e7878-399e-4442-9e36-a9653b4ed8c0" (UID: "c73e7878-399e-4442-9e36-a9653b4ed8c0"). InnerVolumeSpecName "kube-api-access-52smt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.567740 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.569752 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.578724 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.586773 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.654590 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5hbk\" (UniqueName: \"kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk\") pod \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.654638 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dxg7\" (UniqueName: \"kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7\") pod \"fd9f8da0-19c5-4816-ae04-1981898172d7\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.654674 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts\") pod \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\" (UID: \"f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.654739 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts\") pod \"fd9f8da0-19c5-4816-ae04-1981898172d7\" (UID: \"fd9f8da0-19c5-4816-ae04-1981898172d7\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.655039 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52smt\" (UniqueName: \"kubernetes.io/projected/c73e7878-399e-4442-9e36-a9653b4ed8c0-kube-api-access-52smt\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.655055 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c73e7878-399e-4442-9e36-a9653b4ed8c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.655802 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" (UID: "f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.655997 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd9f8da0-19c5-4816-ae04-1981898172d7" (UID: "fd9f8da0-19c5-4816-ae04-1981898172d7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.658944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7" (OuterVolumeSpecName: "kube-api-access-2dxg7") pod "fd9f8da0-19c5-4816-ae04-1981898172d7" (UID: "fd9f8da0-19c5-4816-ae04-1981898172d7"). InnerVolumeSpecName "kube-api-access-2dxg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.659345 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk" (OuterVolumeSpecName: "kube-api-access-h5hbk") pod "f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" (UID: "f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597"). InnerVolumeSpecName "kube-api-access-h5hbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756002 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts\") pod \"61fd86b9-312a-4054-a02c-6c27e4909035\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756379 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61fd86b9-312a-4054-a02c-6c27e4909035" (UID: "61fd86b9-312a-4054-a02c-6c27e4909035"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts\") pod \"dd952bf2-34ba-40e4-af51-2fde882bb593\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756446 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts\") pod \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756491 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnxcw\" (UniqueName: \"kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw\") pod \"61fd86b9-312a-4054-a02c-6c27e4909035\" (UID: \"61fd86b9-312a-4054-a02c-6c27e4909035\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 
15:55:24.756566 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts\") pod \"d90603cb-bccf-4838-b077-7d1fe9da2c34\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756609 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67r8j\" (UniqueName: \"kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j\") pod \"dd952bf2-34ba-40e4-af51-2fde882bb593\" (UID: \"dd952bf2-34ba-40e4-af51-2fde882bb593\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756641 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8nkj\" (UniqueName: \"kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj\") pod \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\" (UID: \"3c4899ae-0dd0-4a8d-aeb2-65738205364c\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756667 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmnrg\" (UniqueName: \"kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg\") pod \"d90603cb-bccf-4838-b077-7d1fe9da2c34\" (UID: \"d90603cb-bccf-4838-b077-7d1fe9da2c34\") " Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.756775 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd952bf2-34ba-40e4-af51-2fde882bb593" (UID: "dd952bf2-34ba-40e4-af51-2fde882bb593"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757053 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61fd86b9-312a-4054-a02c-6c27e4909035-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757068 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd952bf2-34ba-40e4-af51-2fde882bb593-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757081 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5hbk\" (UniqueName: \"kubernetes.io/projected/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-kube-api-access-h5hbk\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757095 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dxg7\" (UniqueName: \"kubernetes.io/projected/fd9f8da0-19c5-4816-ae04-1981898172d7-kube-api-access-2dxg7\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757108 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757109 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90603cb-bccf-4838-b077-7d1fe9da2c34" (UID: "d90603cb-bccf-4838-b077-7d1fe9da2c34"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757120 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd9f8da0-19c5-4816-ae04-1981898172d7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.757479 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c4899ae-0dd0-4a8d-aeb2-65738205364c" (UID: "3c4899ae-0dd0-4a8d-aeb2-65738205364c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.759763 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j" (OuterVolumeSpecName: "kube-api-access-67r8j") pod "dd952bf2-34ba-40e4-af51-2fde882bb593" (UID: "dd952bf2-34ba-40e4-af51-2fde882bb593"). InnerVolumeSpecName "kube-api-access-67r8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.760984 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw" (OuterVolumeSpecName: "kube-api-access-cnxcw") pod "61fd86b9-312a-4054-a02c-6c27e4909035" (UID: "61fd86b9-312a-4054-a02c-6c27e4909035"). InnerVolumeSpecName "kube-api-access-cnxcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.761538 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg" (OuterVolumeSpecName: "kube-api-access-qmnrg") pod "d90603cb-bccf-4838-b077-7d1fe9da2c34" (UID: "d90603cb-bccf-4838-b077-7d1fe9da2c34"). InnerVolumeSpecName "kube-api-access-qmnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.761873 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj" (OuterVolumeSpecName: "kube-api-access-b8nkj") pod "3c4899ae-0dd0-4a8d-aeb2-65738205364c" (UID: "3c4899ae-0dd0-4a8d-aeb2-65738205364c"). InnerVolumeSpecName "kube-api-access-b8nkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863223 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c4899ae-0dd0-4a8d-aeb2-65738205364c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863272 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnxcw\" (UniqueName: \"kubernetes.io/projected/61fd86b9-312a-4054-a02c-6c27e4909035-kube-api-access-cnxcw\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863286 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90603cb-bccf-4838-b077-7d1fe9da2c34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863296 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67r8j\" (UniqueName: 
\"kubernetes.io/projected/dd952bf2-34ba-40e4-af51-2fde882bb593-kube-api-access-67r8j\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863305 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8nkj\" (UniqueName: \"kubernetes.io/projected/3c4899ae-0dd0-4a8d-aeb2-65738205364c-kube-api-access-b8nkj\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:24 crc kubenswrapper[4835]: I0129 15:55:24.863316 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmnrg\" (UniqueName: \"kubernetes.io/projected/d90603cb-bccf-4838-b077-7d1fe9da2c34-kube-api-access-qmnrg\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.445293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-92dgb" event={"ID":"ff472c74-7674-4064-a6f3-8a1cd21f8d9a","Type":"ContainerStarted","Data":"980166b426e17c4c47983854335d5b6f2473ef7574894cdff79499cf71d18b52"} Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.445310 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-919a-account-create-update-gk65v" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.445335 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ggv7m" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.445364 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e45-account-create-update-6dnpd" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.445336 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dlwrd" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.446976 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-d9gpw" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.447194 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2n9r8" Jan 29 15:55:25 crc kubenswrapper[4835]: I0129 15:55:25.482806 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-92dgb" podStartSLOduration=2.6259541889999998 podStartE2EDuration="7.48278883s" podCreationTimestamp="2026-01-29 15:55:18 +0000 UTC" firstStartedPulling="2026-01-29 15:55:19.504662111 +0000 UTC m=+1621.697705765" lastFinishedPulling="2026-01-29 15:55:24.361496752 +0000 UTC m=+1626.554540406" observedRunningTime="2026-01-29 15:55:25.473881907 +0000 UTC m=+1627.666925561" watchObservedRunningTime="2026-01-29 15:55:25.48278883 +0000 UTC m=+1627.675832494" Jan 29 15:55:27 crc kubenswrapper[4835]: I0129 15:55:27.461631 4835 generic.go:334] "Generic (PLEG): container finished" podID="ff472c74-7674-4064-a6f3-8a1cd21f8d9a" containerID="980166b426e17c4c47983854335d5b6f2473ef7574894cdff79499cf71d18b52" exitCode=0 Jan 29 15:55:27 crc kubenswrapper[4835]: I0129 15:55:27.461707 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-92dgb" event={"ID":"ff472c74-7674-4064-a6f3-8a1cd21f8d9a","Type":"ContainerDied","Data":"980166b426e17c4c47983854335d5b6f2473ef7574894cdff79499cf71d18b52"} Jan 29 15:55:28 crc kubenswrapper[4835]: E0129 15:55:28.650913 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 15:55:28 crc kubenswrapper[4835]: E0129 15:55:28.651325 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mglp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hjvxh_openshift-marketplace(7c0649a1-77a6-4e5c-a6f1-d96480a91155): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:55:28 crc kubenswrapper[4835]: E0129 15:55:28.652727 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.788731 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.842096 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle\") pod \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.842142 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gqp\" (UniqueName: \"kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp\") pod \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.842221 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data\") pod \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\" (UID: \"ff472c74-7674-4064-a6f3-8a1cd21f8d9a\") " Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.849193 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp" (OuterVolumeSpecName: "kube-api-access-q2gqp") pod "ff472c74-7674-4064-a6f3-8a1cd21f8d9a" (UID: "ff472c74-7674-4064-a6f3-8a1cd21f8d9a"). InnerVolumeSpecName "kube-api-access-q2gqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.879052 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff472c74-7674-4064-a6f3-8a1cd21f8d9a" (UID: "ff472c74-7674-4064-a6f3-8a1cd21f8d9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.903977 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data" (OuterVolumeSpecName: "config-data") pod "ff472c74-7674-4064-a6f3-8a1cd21f8d9a" (UID: "ff472c74-7674-4064-a6f3-8a1cd21f8d9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.944241 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.944282 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gqp\" (UniqueName: \"kubernetes.io/projected/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-kube-api-access-q2gqp\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:28 crc kubenswrapper[4835]: I0129 15:55:28.944301 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff472c74-7674-4064-a6f3-8a1cd21f8d9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.481176 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-92dgb" 
event={"ID":"ff472c74-7674-4064-a6f3-8a1cd21f8d9a","Type":"ContainerDied","Data":"1d41beba2608c52b32a288a514418edd4ca50e34c128ee197bae870ed3c860d6"} Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.481528 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d41beba2608c52b32a288a514418edd4ca50e34c128ee197bae870ed3c860d6" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.481437 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-92dgb" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.491692 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.493022 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753234 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753657 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90603cb-bccf-4838-b077-7d1fe9da2c34" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753669 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90603cb-bccf-4838-b077-7d1fe9da2c34" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753690 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9f8da0-19c5-4816-ae04-1981898172d7" 
containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753697 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9f8da0-19c5-4816-ae04-1981898172d7" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753707 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73e7878-399e-4442-9e36-a9653b4ed8c0" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753715 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73e7878-399e-4442-9e36-a9653b4ed8c0" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753729 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753736 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753750 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fd86b9-312a-4054-a02c-6c27e4909035" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753756 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fd86b9-312a-4054-a02c-6c27e4909035" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753766 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd952bf2-34ba-40e4-af51-2fde882bb593" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753773 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd952bf2-34ba-40e4-af51-2fde882bb593" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753779 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff472c74-7674-4064-a6f3-8a1cd21f8d9a" containerName="keystone-db-sync" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753784 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff472c74-7674-4064-a6f3-8a1cd21f8d9a" containerName="keystone-db-sync" Jan 29 15:55:29 crc kubenswrapper[4835]: E0129 15:55:29.753796 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4899ae-0dd0-4a8d-aeb2-65738205364c" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753802 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4899ae-0dd0-4a8d-aeb2-65738205364c" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753935 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4899ae-0dd0-4a8d-aeb2-65738205364c" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753954 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fd86b9-312a-4054-a02c-6c27e4909035" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753962 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9f8da0-19c5-4816-ae04-1981898172d7" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753969 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" containerName="mariadb-database-create" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753980 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff472c74-7674-4064-a6f3-8a1cd21f8d9a" containerName="keystone-db-sync" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753989 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd952bf2-34ba-40e4-af51-2fde882bb593" 
containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.753996 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90603cb-bccf-4838-b077-7d1fe9da2c34" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.754002 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73e7878-399e-4442-9e36-a9653b4ed8c0" containerName="mariadb-account-create-update" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.754857 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.757917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.757965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzkj\" (UniqueName: \"kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.757997 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.758028 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.758047 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.758098 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.773400 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.786141 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-b5dhd"] Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.787645 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.798366 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.799845 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.799902 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.799965 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.799855 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxxtt" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.824631 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5dhd"] Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859694 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859862 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859894 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzkj\" (UniqueName: \"kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859928 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859953 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.859987 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.860016 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.860043 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.860107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.860131 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhl6c\" (UniqueName: \"kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.860156 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.861069 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.861710 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.861933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.862594 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.863419 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb\") pod \"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.884696 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzkj\" (UniqueName: \"kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj\") pod 
\"dnsmasq-dns-77bbd879b9-sfqk2\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962701 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhl6c\" (UniqueName: \"kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962729 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " 
pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.962845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.966242 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.966369 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.966708 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.975159 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.983849 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:29 crc kubenswrapper[4835]: I0129 15:55:29.992679 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhl6c\" (UniqueName: \"kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c\") pod \"keystone-bootstrap-b5dhd\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.010776 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.013117 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.016198 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.016938 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.043757 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.074692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.074796 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.074858 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsvw\" (UniqueName: \"kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.074892 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.075021 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.075195 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.075367 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data\") pod \"ceilometer-0\" (UID: 
\"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.076504 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.116423 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.183523 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pzf56"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185049 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185533 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185616 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185641 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185701 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185722 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.185737 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsvw\" (UniqueName: \"kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.193340 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zs6vn" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.193616 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.206308 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.207560 
4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6wzhv"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.210834 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.211669 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.211671 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.212992 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.215955 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8nwk9" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.216253 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.227361 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.240752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.248651 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.251719 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kkwct"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.254365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsvw\" (UniqueName: \"kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw\") pod \"ceilometer-0\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.255212 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.263170 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gjvlt" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.263529 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.263716 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.281175 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6wzhv"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.288827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.289952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.289996 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twv8\" (UniqueName: \"kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290037 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290068 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290109 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4xc5\" (UniqueName: \"kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290145 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290216 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290232 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290269 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wcw\" (UniqueName: \"kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.290384 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.300007 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pzf56"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.311040 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kkwct"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.331096 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:30 crc 
kubenswrapper[4835]: I0129 15:55:30.353579 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-22ql4"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.354765 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.356727 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2xvpx" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.356736 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.369315 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.388714 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-22ql4"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.393148 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.393397 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394204 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config\") pod 
\"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394288 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5twv8\" (UniqueName: \"kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f7b\" (UniqueName: \"kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394400 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394438 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4xc5\" (UniqueName: \"kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5\") pod \"barbican-db-sync-pzf56\" (UID: 
\"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.394953 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395020 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395048 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" 
Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395149 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wcw\" (UniqueName: \"kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395298 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395376 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.395582 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.398160 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.401524 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.401686 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.401870 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.403229 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.403724 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.404404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.404451 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.405596 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.406107 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.410070 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.414155 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.414308 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5twv8\" (UniqueName: \"kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8\") pod \"cinder-db-sync-kkwct\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.416488 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4xc5\" (UniqueName: \"kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5\") pod \"barbican-db-sync-pzf56\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") " pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.418894 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wcw\" (UniqueName: \"kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw\") pod \"neutron-db-sync-6wzhv\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: 
I0129 15:55:30.427391 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497340 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497667 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497706 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497731 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdkdc\" (UniqueName: \"kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497794 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497860 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497896 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497933 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.497981 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.498022 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " 
pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.498040 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58f7b\" (UniqueName: \"kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.500384 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.504624 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.505589 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.511655 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.517081 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-58f7b\" (UniqueName: \"kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b\") pod \"placement-db-sync-22ql4\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.588116 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pzf56" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.596443 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6wzhv" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.598714 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.598826 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.598919 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.599022 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdkdc\" (UniqueName: 
\"kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.599139 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.599275 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.600139 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.600429 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.600513 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config\") 
pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.600699 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.601501 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.606813 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kkwct" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.620404 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdkdc\" (UniqueName: \"kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc\") pod \"dnsmasq-dns-8495b76777-n56gd\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.700302 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-22ql4" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.734144 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.771167 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:30 crc kubenswrapper[4835]: I0129 15:55:30.791930 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-b5dhd"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.172045 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.265649 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pzf56"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.277227 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kkwct"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.312259 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6wzhv"] Jan 29 15:55:31 crc kubenswrapper[4835]: W0129 15:55:31.322890 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea32b2d_5036_4574_8144_a3e3dd02fdf6.slice/crio-34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7 WatchSource:0}: Error finding container 34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7: Status 404 returned error can't find the container with id 34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7 Jan 29 15:55:31 crc kubenswrapper[4835]: W0129 15:55:31.323394 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe114b44_2cf1_4af2_93ae_05b8e56c83c9.slice/crio-9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf WatchSource:0}: Error finding container 9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf: Status 404 returned error 
can't find the container with id 9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.436446 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-22ql4"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.512121 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pzf56" event={"ID":"31877ce4-9de7-4a70-a383-ea7143b6b61d","Type":"ContainerStarted","Data":"76c6a4bf72ec068096350ff0efa4f4ff5d183fd87e021058730e6e80a2605b52"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.514763 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.520460 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kkwct" event={"ID":"eea32b2d-5036-4574-8144-a3e3dd02fdf6","Type":"ContainerStarted","Data":"34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.523086 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-22ql4" event={"ID":"9a23091d-55d5-469e-9428-34435286ef9c","Type":"ContainerStarted","Data":"30b957666fea3aa1d6f9f7a8db74c5fb1b7878ccf4e3c32f9e39c98b8355b653"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.525026 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6wzhv" event={"ID":"be114b44-2cf1-4af2-93ae-05b8e56c83c9","Type":"ContainerStarted","Data":"9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf"} Jan 29 15:55:31 crc kubenswrapper[4835]: W0129 15:55:31.526926 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0f73761_fb6a_432f_b1af_740e344dd787.slice/crio-bc145ddee211c85e6955c8ad0e37ca1800933effa646dce26cc1a814c9e36b99 WatchSource:0}: Error finding 
container bc145ddee211c85e6955c8ad0e37ca1800933effa646dce26cc1a814c9e36b99: Status 404 returned error can't find the container with id bc145ddee211c85e6955c8ad0e37ca1800933effa646dce26cc1a814c9e36b99 Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.527651 4835 generic.go:334] "Generic (PLEG): container finished" podID="b217f852-82d3-409a-8efa-34ad5e647ceb" containerID="349f3848d92dcf3bbd5715702596b0ef56d0f849e74e7954aba1f575975df899" exitCode=0 Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.527705 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" event={"ID":"b217f852-82d3-409a-8efa-34ad5e647ceb","Type":"ContainerDied","Data":"349f3848d92dcf3bbd5715702596b0ef56d0f849e74e7954aba1f575975df899"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.527730 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" event={"ID":"b217f852-82d3-409a-8efa-34ad5e647ceb","Type":"ContainerStarted","Data":"27ccdf009030b2edbaaa44852c9e02dd31d2422b7b6729c1d75523d608c21c0d"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.532626 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerStarted","Data":"8be080b081ec361843e0db12e169953eb314fb7836199af6fdc7eee5fcb9974e"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.534755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5dhd" event={"ID":"0823eeb3-ca9b-41e0-966d-23aca00a0b38","Type":"ContainerStarted","Data":"0f8ecdbdc7f45c75fa3ef0383d8fa264b1db898d64804f621587e9eb1a33135e"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.534786 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5dhd" 
event={"ID":"0823eeb3-ca9b-41e0-966d-23aca00a0b38","Type":"ContainerStarted","Data":"d6b2ca9b00019368d180170bee6be7ae605a8639ca89b30fdc7fc9292eba7d36"} Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.881742 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.916297 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-b5dhd" podStartSLOduration=2.916279202 podStartE2EDuration="2.916279202s" podCreationTimestamp="2026-01-29 15:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:31.569198237 +0000 UTC m=+1633.762241891" watchObservedRunningTime="2026-01-29 15:55:31.916279202 +0000 UTC m=+1634.109322856" Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.926881 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.926939 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pzkj\" (UniqueName: \"kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.927026 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc 
kubenswrapper[4835]: I0129 15:55:31.927081 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.927113 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.927196 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb\") pod \"b217f852-82d3-409a-8efa-34ad5e647ceb\" (UID: \"b217f852-82d3-409a-8efa-34ad5e647ceb\") " Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.951029 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.953670 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj" (OuterVolumeSpecName: "kube-api-access-4pzkj") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "kube-api-access-4pzkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.984031 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:31 crc kubenswrapper[4835]: I0129 15:55:31.989283 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.002325 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.003940 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config" (OuterVolumeSpecName: "config") pod "b217f852-82d3-409a-8efa-34ad5e647ceb" (UID: "b217f852-82d3-409a-8efa-34ad5e647ceb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029489 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029536 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pzkj\" (UniqueName: \"kubernetes.io/projected/b217f852-82d3-409a-8efa-34ad5e647ceb-kube-api-access-4pzkj\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029558 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029571 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029585 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.029596 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b217f852-82d3-409a-8efa-34ad5e647ceb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.548203 4835 generic.go:334] "Generic (PLEG): container finished" podID="c0f73761-fb6a-432f-b1af-740e344dd787" containerID="302cdb6ab279c1363cb013c59a10dedb3aaf537b566f16d25afe0add8c1ab43d" exitCode=0 Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.548288 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495b76777-n56gd" event={"ID":"c0f73761-fb6a-432f-b1af-740e344dd787","Type":"ContainerDied","Data":"302cdb6ab279c1363cb013c59a10dedb3aaf537b566f16d25afe0add8c1ab43d"} Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.548315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495b76777-n56gd" event={"ID":"c0f73761-fb6a-432f-b1af-740e344dd787","Type":"ContainerStarted","Data":"bc145ddee211c85e6955c8ad0e37ca1800933effa646dce26cc1a814c9e36b99"} Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.559218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6wzhv" event={"ID":"be114b44-2cf1-4af2-93ae-05b8e56c83c9","Type":"ContainerStarted","Data":"a4a8d1d6795a57017295b4d7231ffac10f0d8b6e3e695493171f62c381faf40f"} Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.563441 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" event={"ID":"b217f852-82d3-409a-8efa-34ad5e647ceb","Type":"ContainerDied","Data":"27ccdf009030b2edbaaa44852c9e02dd31d2422b7b6729c1d75523d608c21c0d"} Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.563516 4835 scope.go:117] "RemoveContainer" containerID="349f3848d92dcf3bbd5715702596b0ef56d0f849e74e7954aba1f575975df899" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.563614 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77bbd879b9-sfqk2" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.595072 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnnnq" event={"ID":"0830b0ce-038c-41ff-bed9-e4efa49ae35f","Type":"ContainerStarted","Data":"6b7329a1e06a9569e545697f0b7b3b8715cccd53175de5de3306fab6c063fed9"} Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.607111 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.801974 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gnnnq" podStartSLOduration=2.432558404 podStartE2EDuration="30.801946894s" podCreationTimestamp="2026-01-29 15:55:02 +0000 UTC" firstStartedPulling="2026-01-29 15:55:02.966586742 +0000 UTC m=+1605.159630396" lastFinishedPulling="2026-01-29 15:55:31.335975232 +0000 UTC m=+1633.529018886" observedRunningTime="2026-01-29 15:55:32.673045919 +0000 UTC m=+1634.866089593" watchObservedRunningTime="2026-01-29 15:55:32.801946894 +0000 UTC m=+1634.994990568" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.873668 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6wzhv" podStartSLOduration=2.8736454780000003 podStartE2EDuration="2.873645478s" podCreationTimestamp="2026-01-29 15:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:32.718802374 +0000 UTC m=+1634.911846028" watchObservedRunningTime="2026-01-29 15:55:32.873645478 +0000 UTC m=+1635.066689132" Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.904307 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:32 crc kubenswrapper[4835]: I0129 15:55:32.913247 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-77bbd879b9-sfqk2"] Jan 29 15:55:33 crc kubenswrapper[4835]: I0129 15:55:33.633484 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495b76777-n56gd" event={"ID":"c0f73761-fb6a-432f-b1af-740e344dd787","Type":"ContainerStarted","Data":"41b19aa8d9c8ac559b5bc4737b481e23888428560ba89159943ec3ed83441a34"} Jan 29 15:55:33 crc kubenswrapper[4835]: I0129 15:55:33.634828 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:33 crc kubenswrapper[4835]: I0129 15:55:33.667205 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8495b76777-n56gd" podStartSLOduration=3.667183425 podStartE2EDuration="3.667183425s" podCreationTimestamp="2026-01-29 15:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:55:33.65857887 +0000 UTC m=+1635.851622524" watchObservedRunningTime="2026-01-29 15:55:33.667183425 +0000 UTC m=+1635.860227079" Jan 29 15:55:34 crc kubenswrapper[4835]: I0129 15:55:34.505761 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b217f852-82d3-409a-8efa-34ad5e647ceb" path="/var/lib/kubelet/pods/b217f852-82d3-409a-8efa-34ad5e647ceb/volumes" Jan 29 15:55:36 crc kubenswrapper[4835]: I0129 15:55:36.661512 4835 generic.go:334] "Generic (PLEG): container finished" podID="0823eeb3-ca9b-41e0-966d-23aca00a0b38" containerID="0f8ecdbdc7f45c75fa3ef0383d8fa264b1db898d64804f621587e9eb1a33135e" exitCode=0 Jan 29 15:55:36 crc kubenswrapper[4835]: I0129 15:55:36.661598 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5dhd" event={"ID":"0823eeb3-ca9b-41e0-966d-23aca00a0b38","Type":"ContainerDied","Data":"0f8ecdbdc7f45c75fa3ef0383d8fa264b1db898d64804f621587e9eb1a33135e"} Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.654767 4835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.699798 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-b5dhd" event={"ID":"0823eeb3-ca9b-41e0-966d-23aca00a0b38","Type":"ContainerDied","Data":"d6b2ca9b00019368d180170bee6be7ae605a8639ca89b30fdc7fc9292eba7d36"} Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.699839 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6b2ca9b00019368d180170bee6be7ae605a8639ca89b30fdc7fc9292eba7d36" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.699904 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-b5dhd" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.736104 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.796426 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.796674 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" containerID="cri-o://9af6db8002bb1c810886d469801671396039c3d9f5118f544aa527d7141d7806" gracePeriod=10 Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803128 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803218 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803239 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803286 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.803303 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhl6c\" (UniqueName: \"kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c\") pod \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\" (UID: \"0823eeb3-ca9b-41e0-966d-23aca00a0b38\") " Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.815298 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.816619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.824737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c" (OuterVolumeSpecName: "kube-api-access-bhl6c") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). InnerVolumeSpecName "kube-api-access-bhl6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.838290 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts" (OuterVolumeSpecName: "scripts") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.845057 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.847890 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data" (OuterVolumeSpecName: "config-data") pod "0823eeb3-ca9b-41e0-966d-23aca00a0b38" (UID: "0823eeb3-ca9b-41e0-966d-23aca00a0b38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905024 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905550 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhl6c\" (UniqueName: \"kubernetes.io/projected/0823eeb3-ca9b-41e0-966d-23aca00a0b38-kube-api-access-bhl6c\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905649 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905868 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905936 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:40 crc kubenswrapper[4835]: I0129 15:55:40.905992 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0823eeb3-ca9b-41e0-966d-23aca00a0b38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.712757 4835 generic.go:334] "Generic (PLEG): container finished" podID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerID="9af6db8002bb1c810886d469801671396039c3d9f5118f544aa527d7141d7806" exitCode=0 Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.712807 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" event={"ID":"072352c5-f0e2-4c07-8501-e118357bcd7e","Type":"ContainerDied","Data":"9af6db8002bb1c810886d469801671396039c3d9f5118f544aa527d7141d7806"} Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.749955 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-b5dhd"] Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.756914 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-b5dhd"] Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.825636 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zsxhl"] Jan 29 15:55:41 crc kubenswrapper[4835]: E0129 15:55:41.826226 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0823eeb3-ca9b-41e0-966d-23aca00a0b38" containerName="keystone-bootstrap" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.826334 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0823eeb3-ca9b-41e0-966d-23aca00a0b38" containerName="keystone-bootstrap" Jan 29 15:55:41 crc kubenswrapper[4835]: E0129 15:55:41.826424 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b217f852-82d3-409a-8efa-34ad5e647ceb" containerName="init" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.826527 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b217f852-82d3-409a-8efa-34ad5e647ceb" containerName="init" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 
15:55:41.826824 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0823eeb3-ca9b-41e0-966d-23aca00a0b38" containerName="keystone-bootstrap" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.826912 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b217f852-82d3-409a-8efa-34ad5e647ceb" containerName="init" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.827533 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.831457 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.831727 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.831902 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.832518 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxxtt" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.834330 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.840662 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zsxhl"] Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.926875 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.926939 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.926965 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.926997 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.927027 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:41 crc kubenswrapper[4835]: I0129 15:55:41.927082 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54ms\" (UniqueName: \"kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028458 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028571 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028631 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028668 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.028724 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54ms\" (UniqueName: 
\"kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.033933 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.034661 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.034790 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.035728 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.040869 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys\") pod \"keystone-bootstrap-zsxhl\" (UID: 
\"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.052156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54ms\" (UniqueName: \"kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms\") pod \"keystone-bootstrap-zsxhl\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.146068 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.492075 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:55:42 crc kubenswrapper[4835]: E0129 15:55:42.492290 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:55:42 crc kubenswrapper[4835]: I0129 15:55:42.501737 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0823eeb3-ca9b-41e0-966d-23aca00a0b38" path="/var/lib/kubelet/pods/0823eeb3-ca9b-41e0-966d-23aca00a0b38/volumes" Jan 29 15:55:43 crc kubenswrapper[4835]: E0129 15:55:43.497452 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:55:45 crc 
kubenswrapper[4835]: I0129 15:55:45.757583 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:55:50 crc kubenswrapper[4835]: I0129 15:55:50.757577 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:55:55 crc kubenswrapper[4835]: I0129 15:55:55.757229 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:55:55 crc kubenswrapper[4835]: I0129 15:55:55.758024 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:55:56 crc kubenswrapper[4835]: I0129 15:55:56.493678 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:55:56 crc kubenswrapper[4835]: E0129 15:55:56.495716 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:56:00 crc kubenswrapper[4835]: I0129 15:56:00.758084 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" 
podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:56:05 crc kubenswrapper[4835]: I0129 15:56:05.758028 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:56:08 crc kubenswrapper[4835]: E0129 15:56:08.571751 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:56:10 crc kubenswrapper[4835]: I0129 15:56:10.492127 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:56:10 crc kubenswrapper[4835]: E0129 15:56:10.492853 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:56:10 crc kubenswrapper[4835]: I0129 15:56:10.760211 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 29 15:56:20 crc kubenswrapper[4835]: I0129 15:56:20.757131 4835 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 29 15:56:22 crc kubenswrapper[4835]: E0129 15:56:22.105115 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Jan 29 15:56:22 crc kubenswrapper[4835]: E0129 15:56:22.105629 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4xc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFiles
ystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-pzf56_openstack(31877ce4-9de7-4a70-a383-ea7143b6b61d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:56:22 crc kubenswrapper[4835]: E0129 15:56:22.106820 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pzf56" podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" Jan 29 15:56:22 crc kubenswrapper[4835]: I0129 15:56:22.491699 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:56:22 crc kubenswrapper[4835]: E0129 15:56:22.491966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:56:23 crc kubenswrapper[4835]: E0129 15:56:23.121304 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-pzf56" 
podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" Jan 29 15:56:25 crc kubenswrapper[4835]: E0129 15:56:25.027566 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.140608 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" event={"ID":"072352c5-f0e2-4c07-8501-e118357bcd7e","Type":"ContainerDied","Data":"6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1"} Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.140924 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bfe0417ebc6662d3caa2ae2f51a76ec0f574aeff969e673dac557a25108bbe1" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.342212 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.488119 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.488560 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.488671 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.488706 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.489140 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kncp\" (UniqueName: \"kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.489266 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb\") pod \"072352c5-f0e2-4c07-8501-e118357bcd7e\" (UID: \"072352c5-f0e2-4c07-8501-e118357bcd7e\") " Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.505243 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zsxhl"] Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.513025 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp" (OuterVolumeSpecName: "kube-api-access-4kncp") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "kube-api-access-4kncp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.532490 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.546223 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.552648 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.565361 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.566524 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config" (OuterVolumeSpecName: "config") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.577805 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "072352c5-f0e2-4c07-8501-e118357bcd7e" (UID: "072352c5-f0e2-4c07-8501-e118357bcd7e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.592928 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.592971 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kncp\" (UniqueName: \"kubernetes.io/projected/072352c5-f0e2-4c07-8501-e118357bcd7e-kube-api-access-4kncp\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.592986 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.592998 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.593010 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.593021 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/072352c5-f0e2-4c07-8501-e118357bcd7e-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:25 crc kubenswrapper[4835]: I0129 15:56:25.758277 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 29 15:56:26 crc 
kubenswrapper[4835]: I0129 15:56:26.148939 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zsxhl" event={"ID":"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c","Type":"ContainerStarted","Data":"d144f854f6bf8dff70898b1e90065e8d6d07beb6165f14474cfe2b942d68eec4"} Jan 29 15:56:26 crc kubenswrapper[4835]: I0129 15:56:26.148990 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bdffd66f-h8pdd" Jan 29 15:56:26 crc kubenswrapper[4835]: I0129 15:56:26.182801 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:56:26 crc kubenswrapper[4835]: I0129 15:56:26.197232 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bdffd66f-h8pdd"] Jan 29 15:56:26 crc kubenswrapper[4835]: I0129 15:56:26.511227 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" path="/var/lib/kubelet/pods/072352c5-f0e2-4c07-8501-e118357bcd7e/volumes" Jan 29 15:56:27 crc kubenswrapper[4835]: I0129 15:56:27.173356 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-22ql4" event={"ID":"9a23091d-55d5-469e-9428-34435286ef9c","Type":"ContainerStarted","Data":"8eacf4bca6f1fc794f6ecde66779587ce925e62e92a694fa2830be73105d9356"} Jan 29 15:56:33 crc kubenswrapper[4835]: I0129 15:56:33.492222 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:56:33 crc kubenswrapper[4835]: E0129 15:56:33.493077 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" 
podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:56:36 crc kubenswrapper[4835]: I0129 15:56:36.257593 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zsxhl" event={"ID":"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c","Type":"ContainerStarted","Data":"325712bfc8730f3f0f3a314d46167d7a286b943a428fc267faa019fc9f75760b"} Jan 29 15:56:37 crc kubenswrapper[4835]: I0129 15:56:37.266325 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerStarted","Data":"32c9bf6ebba3219ab6d92c10d5d50716276828dc0cef89a47bc6f0d773dea98a"} Jan 29 15:56:37 crc kubenswrapper[4835]: E0129 15:56:37.495159 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:56:38 crc kubenswrapper[4835]: I0129 15:56:38.304674 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-22ql4" podStartSLOduration=14.711167362 podStartE2EDuration="1m8.304652836s" podCreationTimestamp="2026-01-29 15:55:30 +0000 UTC" firstStartedPulling="2026-01-29 15:55:31.456722743 +0000 UTC m=+1633.649766387" lastFinishedPulling="2026-01-29 15:56:25.050208207 +0000 UTC m=+1687.243251861" observedRunningTime="2026-01-29 15:56:38.299663102 +0000 UTC m=+1700.492706766" watchObservedRunningTime="2026-01-29 15:56:38.304652836 +0000 UTC m=+1700.497696510" Jan 29 15:56:39 crc kubenswrapper[4835]: E0129 15:56:39.962380 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Jan 29 
15:56:39 crc kubenswrapper[4835]: E0129 15:56:39.962972 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5twv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil
,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kkwct_openstack(eea32b2d-5036-4574-8144-a3e3dd02fdf6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:56:39 crc kubenswrapper[4835]: E0129 15:56:39.964227 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kkwct" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" Jan 29 15:56:40 crc kubenswrapper[4835]: E0129 15:56:40.295176 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-kkwct" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" Jan 29 15:56:40 crc kubenswrapper[4835]: I0129 15:56:40.335162 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zsxhl" podStartSLOduration=59.335143815 podStartE2EDuration="59.335143815s" podCreationTimestamp="2026-01-29 15:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:56:40.326503179 +0000 
UTC m=+1702.519546843" watchObservedRunningTime="2026-01-29 15:56:40.335143815 +0000 UTC m=+1702.528187469" Jan 29 15:56:43 crc kubenswrapper[4835]: I0129 15:56:43.320769 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pzf56" event={"ID":"31877ce4-9de7-4a70-a383-ea7143b6b61d","Type":"ContainerStarted","Data":"9c477481de227c5703c9afbeee010830e8cbee3616601ec879c02c3fb4688764"} Jan 29 15:56:43 crc kubenswrapper[4835]: I0129 15:56:43.348803 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pzf56" podStartSLOduration=2.241346077 podStartE2EDuration="1m13.348779724s" podCreationTimestamp="2026-01-29 15:55:30 +0000 UTC" firstStartedPulling="2026-01-29 15:55:31.278517764 +0000 UTC m=+1633.471561428" lastFinishedPulling="2026-01-29 15:56:42.385951421 +0000 UTC m=+1704.578995075" observedRunningTime="2026-01-29 15:56:43.341126822 +0000 UTC m=+1705.534170486" watchObservedRunningTime="2026-01-29 15:56:43.348779724 +0000 UTC m=+1705.541823388" Jan 29 15:56:44 crc kubenswrapper[4835]: I0129 15:56:44.491796 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:56:44 crc kubenswrapper[4835]: E0129 15:56:44.492405 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:56:47 crc kubenswrapper[4835]: I0129 15:56:47.359666 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerStarted","Data":"6c7a5b88b92d2010764150e821844468f4ea05a24bae76f5a6101d5f81684a38"} Jan 29 15:56:50 crc kubenswrapper[4835]: I0129 15:56:50.386216 4835 generic.go:334] "Generic (PLEG): container finished" podID="77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" containerID="325712bfc8730f3f0f3a314d46167d7a286b943a428fc267faa019fc9f75760b" exitCode=0 Jan 29 15:56:50 crc kubenswrapper[4835]: I0129 15:56:50.386282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zsxhl" event={"ID":"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c","Type":"ContainerDied","Data":"325712bfc8730f3f0f3a314d46167d7a286b943a428fc267faa019fc9f75760b"} Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.735451 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.772944 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.773398 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f54ms\" (UniqueName: \"kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.773448 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc 
kubenswrapper[4835]: I0129 15:56:51.773666 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.773717 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.773746 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts\") pod \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\" (UID: \"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c\") " Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.778987 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts" (OuterVolumeSpecName: "scripts") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.779914 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.780716 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.782819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms" (OuterVolumeSpecName: "kube-api-access-f54ms") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "kube-api-access-f54ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.799355 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data" (OuterVolumeSpecName: "config-data") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.803738 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" (UID: "77b83e2e-c96d-4376-ad4d-c3765ca6ca5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875748 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875786 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875855 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875871 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875885 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f54ms\" (UniqueName: \"kubernetes.io/projected/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-kube-api-access-f54ms\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:51 crc kubenswrapper[4835]: I0129 15:56:51.875897 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.402379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zsxhl" event={"ID":"77b83e2e-c96d-4376-ad4d-c3765ca6ca5c","Type":"ContainerDied","Data":"d144f854f6bf8dff70898b1e90065e8d6d07beb6165f14474cfe2b942d68eec4"} Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 
15:56:52.402424 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d144f854f6bf8dff70898b1e90065e8d6d07beb6165f14474cfe2b942d68eec4" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.402442 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zsxhl" Jan 29 15:56:52 crc kubenswrapper[4835]: E0129 15:56:52.469713 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77b83e2e_c96d_4376_ad4d_c3765ca6ca5c.slice/crio-d144f854f6bf8dff70898b1e90065e8d6d07beb6165f14474cfe2b942d68eec4\": RecentStats: unable to find data in memory cache]" Jan 29 15:56:52 crc kubenswrapper[4835]: E0129 15:56:52.499147 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.517840 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 15:56:52 crc kubenswrapper[4835]: E0129 15:56:52.521132 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.521162 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" Jan 29 15:56:52 crc kubenswrapper[4835]: E0129 15:56:52.521173 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" containerName="keystone-bootstrap" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.521179 4835 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" containerName="keystone-bootstrap" Jan 29 15:56:52 crc kubenswrapper[4835]: E0129 15:56:52.521190 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="init" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.521197 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="init" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.521406 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" containerName="keystone-bootstrap" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.521423 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="072352c5-f0e2-4c07-8501-e118357bcd7e" containerName="dnsmasq-dns" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.522130 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.524643 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.525004 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.525158 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.525449 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-qxxtt" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.525614 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.525719 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.533847 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.596699 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.596748 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " 
pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.596790 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.596940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.597151 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.597237 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.597262 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " 
pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.597353 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bhb\" (UniqueName: \"kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698749 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698870 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bhb\" (UniqueName: \"kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" 
Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698914 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698930 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698952 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.698988 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.704484 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.704687 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.704755 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.706200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.708221 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.709533 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.712545 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs\") 
pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.719944 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bhb\" (UniqueName: \"kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb\") pod \"keystone-5f9f4f755f-g8j9g\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") " pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:52 crc kubenswrapper[4835]: I0129 15:56:52.853461 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:53 crc kubenswrapper[4835]: I0129 15:56:53.443001 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 15:56:54 crc kubenswrapper[4835]: I0129 15:56:54.418086 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9f4f755f-g8j9g" event={"ID":"94d5afdb-a473-4923-a414-a7efb63d80cc","Type":"ContainerStarted","Data":"4951ed5c7191fae66a00ed13ac8412dceb569745f3e377377570992027b1281e"} Jan 29 15:56:55 crc kubenswrapper[4835]: I0129 15:56:55.427774 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9f4f755f-g8j9g" event={"ID":"94d5afdb-a473-4923-a414-a7efb63d80cc","Type":"ContainerStarted","Data":"0b0c35e1ffcc8a8104e5ff789a24cd4d76e701e7faa753d0ffc606c3ac6de52a"} Jan 29 15:56:56 crc kubenswrapper[4835]: I0129 15:56:56.441332 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:56:56 crc kubenswrapper[4835]: I0129 15:56:56.468940 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f9f4f755f-g8j9g" podStartSLOduration=4.468923253 podStartE2EDuration="4.468923253s" podCreationTimestamp="2026-01-29 15:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:56:56.466203565 +0000 UTC m=+1718.659247219" watchObservedRunningTime="2026-01-29 15:56:56.468923253 +0000 UTC m=+1718.661966907" Jan 29 15:56:59 crc kubenswrapper[4835]: I0129 15:56:59.491937 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:56:59 crc kubenswrapper[4835]: E0129 15:56:59.492644 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:57:04 crc kubenswrapper[4835]: E0129 15:57:04.193712 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:57:04 crc kubenswrapper[4835]: I0129 15:57:04.673306 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 15:57:05 crc kubenswrapper[4835]: I0129 15:57:05.542919 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kkwct" event={"ID":"eea32b2d-5036-4574-8144-a3e3dd02fdf6","Type":"ContainerStarted","Data":"4c047d38e9f0fb7d24900c59d7d8a6bf6d2cdca2cca5301978c03cb19499c5bb"} Jan 29 15:57:05 crc kubenswrapper[4835]: I0129 15:57:05.546544 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerStarted","Data":"12a764476a3c24bc45e88649299c97bdafd7018f0a6e93b14180aca8b0711a8a"} Jan 29 15:57:05 crc kubenswrapper[4835]: I0129 15:57:05.572366 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kkwct" podStartSLOduration=2.64752496 podStartE2EDuration="1m35.572343095s" podCreationTimestamp="2026-01-29 15:55:30 +0000 UTC" firstStartedPulling="2026-01-29 15:55:31.334643718 +0000 UTC m=+1633.527687372" lastFinishedPulling="2026-01-29 15:57:04.259461833 +0000 UTC m=+1726.452505507" observedRunningTime="2026-01-29 15:57:05.562588581 +0000 UTC m=+1727.755632235" watchObservedRunningTime="2026-01-29 15:57:05.572343095 +0000 UTC m=+1727.765386749" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.492358 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:57:10 crc kubenswrapper[4835]: E0129 15:57:10.493102 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.674832 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.676859 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.684481 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.840555 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.840675 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.840727 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnp2c\" (UniqueName: \"kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.943020 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnp2c\" (UniqueName: \"kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.943130 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.943228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.943778 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.943804 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:10 crc kubenswrapper[4835]: I0129 15:57:10.978798 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnp2c\" (UniqueName: \"kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c\") pod \"certified-operators-pc9xw\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:11 crc kubenswrapper[4835]: I0129 15:57:11.002628 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:18 crc kubenswrapper[4835]: E0129 15:57:18.112992 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:57:21 crc kubenswrapper[4835]: I0129 15:57:21.493120 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:57:21 crc kubenswrapper[4835]: E0129 15:57:21.494133 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:57:23 crc kubenswrapper[4835]: E0129 15:57:23.260545 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 15:57:23 crc kubenswrapper[4835]: E0129 15:57:23.261062 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpsvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(45de43a1-0e6c-4e5d-8ea2-196348bae1a9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 15:57:23 crc kubenswrapper[4835]: E0129 15:57:23.262697 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.501200 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.729531 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerStarted","Data":"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f"} Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.729578 4835 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerStarted","Data":"59f628fce7d0de30d45d4beba0d09716c2a313c7ca4687d655226faf6922cad1"} Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.729667 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-central-agent" containerID="cri-o://32c9bf6ebba3219ab6d92c10d5d50716276828dc0cef89a47bc6f0d773dea98a" gracePeriod=30 Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.729708 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="sg-core" containerID="cri-o://12a764476a3c24bc45e88649299c97bdafd7018f0a6e93b14180aca8b0711a8a" gracePeriod=30 Jan 29 15:57:23 crc kubenswrapper[4835]: I0129 15:57:23.729727 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-notification-agent" containerID="cri-o://6c7a5b88b92d2010764150e821844468f4ea05a24bae76f5a6101d5f81684a38" gracePeriod=30 Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.526170 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.748686 4835 generic.go:334] "Generic (PLEG): container finished" podID="796e145f-c075-485b-91a5-c088b964fd44" containerID="a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f" exitCode=0 Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.748768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" 
event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerDied","Data":"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f"} Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.760912 4835 generic.go:334] "Generic (PLEG): container finished" podID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerID="12a764476a3c24bc45e88649299c97bdafd7018f0a6e93b14180aca8b0711a8a" exitCode=2 Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.760955 4835 generic.go:334] "Generic (PLEG): container finished" podID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerID="6c7a5b88b92d2010764150e821844468f4ea05a24bae76f5a6101d5f81684a38" exitCode=0 Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.760964 4835 generic.go:334] "Generic (PLEG): container finished" podID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerID="32c9bf6ebba3219ab6d92c10d5d50716276828dc0cef89a47bc6f0d773dea98a" exitCode=0 Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.760983 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerDied","Data":"12a764476a3c24bc45e88649299c97bdafd7018f0a6e93b14180aca8b0711a8a"} Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.761009 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerDied","Data":"6c7a5b88b92d2010764150e821844468f4ea05a24bae76f5a6101d5f81684a38"} Jan 29 15:57:24 crc kubenswrapper[4835]: I0129 15:57:24.761034 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerDied","Data":"32c9bf6ebba3219ab6d92c10d5d50716276828dc0cef89a47bc6f0d773dea98a"} Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.191693 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299236 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299325 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299375 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299424 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299450 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299514 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.299604 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsvw\" (UniqueName: \"kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw\") pod \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\" (UID: \"45de43a1-0e6c-4e5d-8ea2-196348bae1a9\") " Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.301831 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.302231 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.306170 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts" (OuterVolumeSpecName: "scripts") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.307355 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw" (OuterVolumeSpecName: "kube-api-access-qpsvw") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "kube-api-access-qpsvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.324915 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.352813 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.361088 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data" (OuterVolumeSpecName: "config-data") pod "45de43a1-0e6c-4e5d-8ea2-196348bae1a9" (UID: "45de43a1-0e6c-4e5d-8ea2-196348bae1a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401449 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsvw\" (UniqueName: \"kubernetes.io/projected/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-kube-api-access-qpsvw\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401503 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401515 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401527 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401538 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401548 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.401558 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45de43a1-0e6c-4e5d-8ea2-196348bae1a9-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.771748 4835 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"45de43a1-0e6c-4e5d-8ea2-196348bae1a9","Type":"ContainerDied","Data":"8be080b081ec361843e0db12e169953eb314fb7836199af6fdc7eee5fcb9974e"} Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.772706 4835 scope.go:117] "RemoveContainer" containerID="12a764476a3c24bc45e88649299c97bdafd7018f0a6e93b14180aca8b0711a8a" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.771811 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.794824 4835 scope.go:117] "RemoveContainer" containerID="6c7a5b88b92d2010764150e821844468f4ea05a24bae76f5a6101d5f81684a38" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.830444 4835 scope.go:117] "RemoveContainer" containerID="32c9bf6ebba3219ab6d92c10d5d50716276828dc0cef89a47bc6f0d773dea98a" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.842954 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.850031 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.869674 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:25 crc kubenswrapper[4835]: E0129 15:57:25.870096 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="sg-core" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.870116 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="sg-core" Jan 29 15:57:25 crc kubenswrapper[4835]: E0129 15:57:25.870129 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-notification-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 
15:57:25.870136 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-notification-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: E0129 15:57:25.870154 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-central-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.870161 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-central-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.870305 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="sg-core" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.870323 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-central-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.870338 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" containerName="ceilometer-notification-agent" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.873728 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.879071 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.884459 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:25 crc kubenswrapper[4835]: I0129 15:57:25.886361 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017331 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017380 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017430 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5zr\" (UniqueName: \"kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017486 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " 
pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017553 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017582 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.017597 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.118727 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.118802 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.118836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.118849 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.118894 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.119586 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.119685 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t5zr\" (UniqueName: \"kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.119954 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc 
kubenswrapper[4835]: I0129 15:57:26.119991 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.123241 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.124495 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.129164 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.130398 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.163347 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t5zr\" (UniqueName: \"kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr\") pod \"ceilometer-0\" (UID: 
\"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.194926 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.503454 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45de43a1-0e6c-4e5d-8ea2-196348bae1a9" path="/var/lib/kubelet/pods/45de43a1-0e6c-4e5d-8ea2-196348bae1a9/volumes" Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.622789 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:26 crc kubenswrapper[4835]: W0129 15:57:26.623327 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafbfe172_94cc_440d_bf46_b7e3207a53a0.slice/crio-1ac207a7eec002aab4a9463e5f8b915fe0a9c965ec7fa77739ad2518b7f0b72d WatchSource:0}: Error finding container 1ac207a7eec002aab4a9463e5f8b915fe0a9c965ec7fa77739ad2518b7f0b72d: Status 404 returned error can't find the container with id 1ac207a7eec002aab4a9463e5f8b915fe0a9c965ec7fa77739ad2518b7f0b72d Jan 29 15:57:26 crc kubenswrapper[4835]: I0129 15:57:26.780727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerStarted","Data":"1ac207a7eec002aab4a9463e5f8b915fe0a9c965ec7fa77739ad2518b7f0b72d"} Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.690311 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.691941 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.696188 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.696339 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.696512 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d5gb2" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.700497 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.789577 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerStarted","Data":"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f"} Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.847530 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.847645 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.847672 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.847908 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthrm\" (UniqueName: \"kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.949059 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.949120 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.949176 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthrm\" (UniqueName: \"kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.949228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.951081 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.954680 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.955246 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:27 crc kubenswrapper[4835]: I0129 15:57:27.978433 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthrm\" (UniqueName: \"kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm\") pod \"openstackclient\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " pod="openstack/openstackclient" Jan 29 15:57:28 crc kubenswrapper[4835]: I0129 15:57:28.050022 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 15:57:28 crc kubenswrapper[4835]: I0129 15:57:28.570041 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 15:57:28 crc kubenswrapper[4835]: I0129 15:57:28.801080 4835 generic.go:334] "Generic (PLEG): container finished" podID="796e145f-c075-485b-91a5-c088b964fd44" containerID="7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f" exitCode=0 Jan 29 15:57:28 crc kubenswrapper[4835]: I0129 15:57:28.801159 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerDied","Data":"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f"} Jan 29 15:57:28 crc kubenswrapper[4835]: I0129 15:57:28.802909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"95c9a919-e32a-49cd-b097-79fe61d7ec64","Type":"ContainerStarted","Data":"1b1846db8ac8e0df3e31950252672f8a19a754f293db0f53b36dd96fad2b0412"} Jan 29 15:57:31 crc kubenswrapper[4835]: E0129 15:57:31.594152 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:57:32 crc kubenswrapper[4835]: I0129 15:57:32.849562 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerStarted","Data":"94b3e770c457a829a01ec383a30839c8e3de26a73e6f93cbb1452cf6a0d5dda6"} Jan 29 15:57:32 crc kubenswrapper[4835]: I0129 15:57:32.852756 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" 
event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerStarted","Data":"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031"} Jan 29 15:57:32 crc kubenswrapper[4835]: I0129 15:57:32.885291 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pc9xw" podStartSLOduration=15.615779194 podStartE2EDuration="22.885267542s" podCreationTimestamp="2026-01-29 15:57:10 +0000 UTC" firstStartedPulling="2026-01-29 15:57:24.752352677 +0000 UTC m=+1746.945396331" lastFinishedPulling="2026-01-29 15:57:32.021841015 +0000 UTC m=+1754.214884679" observedRunningTime="2026-01-29 15:57:32.877056267 +0000 UTC m=+1755.070099921" watchObservedRunningTime="2026-01-29 15:57:32.885267542 +0000 UTC m=+1755.078311206" Jan 29 15:57:33 crc kubenswrapper[4835]: I0129 15:57:33.647729 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:33 crc kubenswrapper[4835]: I0129 15:57:33.865702 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerStarted","Data":"ba8ba4fafa2c7c76b71046102417802b83d71ed4bd680acee4ac9d1ac1c798ca"} Jan 29 15:57:33 crc kubenswrapper[4835]: I0129 15:57:33.865762 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerStarted","Data":"dbcaf8bd84682ff8e8b1e29df651ef4c9d04e5bf7fc5794ff3f1016efeb46490"} Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.375436 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.378063 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.384619 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.491946 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:57:35 crc kubenswrapper[4835]: E0129 15:57:35.492226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.497940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.498014 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.498298 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzhcv\" (UniqueName: 
\"kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.600177 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.600259 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.600384 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzhcv\" (UniqueName: \"kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.600736 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.601290 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.622672 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzhcv\" (UniqueName: \"kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv\") pod \"community-operators-zhf9x\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:35 crc kubenswrapper[4835]: I0129 15:57:35.700821 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.372125 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:57:36 crc kubenswrapper[4835]: W0129 15:57:36.383516 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a1ed6cf_5cdd_4ccc_841b_56840df89d98.slice/crio-08bea58f294112030bf34464868e8a0483c2dfe3df616bf5730ce102d0d2638b WatchSource:0}: Error finding container 08bea58f294112030bf34464868e8a0483c2dfe3df616bf5730ce102d0d2638b: Status 404 returned error can't find the container with id 08bea58f294112030bf34464868e8a0483c2dfe3df616bf5730ce102d0d2638b Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.898628 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerStarted","Data":"08bea58f294112030bf34464868e8a0483c2dfe3df616bf5730ce102d0d2638b"} Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901016 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerStarted","Data":"482ec36238c8a379890194d81a3836cfc8dd11eb8d83536fcf15d2424342971a"} Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901200 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-central-agent" containerID="cri-o://94b3e770c457a829a01ec383a30839c8e3de26a73e6f93cbb1452cf6a0d5dda6" gracePeriod=30 Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901523 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901578 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="proxy-httpd" containerID="cri-o://482ec36238c8a379890194d81a3836cfc8dd11eb8d83536fcf15d2424342971a" gracePeriod=30 Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901679 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-notification-agent" containerID="cri-o://dbcaf8bd84682ff8e8b1e29df651ef4c9d04e5bf7fc5794ff3f1016efeb46490" gracePeriod=30 Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.901730 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="sg-core" containerID="cri-o://ba8ba4fafa2c7c76b71046102417802b83d71ed4bd680acee4ac9d1ac1c798ca" gracePeriod=30 Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.907288 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a23091d-55d5-469e-9428-34435286ef9c" containerID="8eacf4bca6f1fc794f6ecde66779587ce925e62e92a694fa2830be73105d9356" exitCode=0 Jan 29 15:57:36 crc 
kubenswrapper[4835]: I0129 15:57:36.907354 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-22ql4" event={"ID":"9a23091d-55d5-469e-9428-34435286ef9c","Type":"ContainerDied","Data":"8eacf4bca6f1fc794f6ecde66779587ce925e62e92a694fa2830be73105d9356"} Jan 29 15:57:36 crc kubenswrapper[4835]: I0129 15:57:36.964228 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8661563279999998 podStartE2EDuration="11.964206373s" podCreationTimestamp="2026-01-29 15:57:25 +0000 UTC" firstStartedPulling="2026-01-29 15:57:26.764324262 +0000 UTC m=+1748.957367916" lastFinishedPulling="2026-01-29 15:57:35.862374307 +0000 UTC m=+1758.055417961" observedRunningTime="2026-01-29 15:57:36.953869754 +0000 UTC m=+1759.146913408" watchObservedRunningTime="2026-01-29 15:57:36.964206373 +0000 UTC m=+1759.157250027" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.495677 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.509293 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.513556 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.514194 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.514451 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.547226 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.649754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.649845 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.650214 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc 
kubenswrapper[4835]: I0129 15:57:37.650453 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tczpz\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.650669 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.650884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.651011 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.651115 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " 
pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752457 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tczpz\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752661 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752689 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc 
kubenswrapper[4835]: I0129 15:57:37.752716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.752843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.753411 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.753789 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.762040 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.762898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.763226 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.767340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.771898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tczpz\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.773241 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs\") pod \"swift-proxy-6677554fc9-4vpbh\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.861155 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.924851 4835 generic.go:334] "Generic (PLEG): container finished" podID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerID="482ec36238c8a379890194d81a3836cfc8dd11eb8d83536fcf15d2424342971a" exitCode=0 Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.924878 4835 generic.go:334] "Generic (PLEG): container finished" podID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerID="ba8ba4fafa2c7c76b71046102417802b83d71ed4bd680acee4ac9d1ac1c798ca" exitCode=2 Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.924886 4835 generic.go:334] "Generic (PLEG): container finished" podID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerID="dbcaf8bd84682ff8e8b1e29df651ef4c9d04e5bf7fc5794ff3f1016efeb46490" exitCode=0 Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.924893 4835 generic.go:334] "Generic (PLEG): container finished" podID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerID="94b3e770c457a829a01ec383a30839c8e3de26a73e6f93cbb1452cf6a0d5dda6" exitCode=0 Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.925064 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerDied","Data":"482ec36238c8a379890194d81a3836cfc8dd11eb8d83536fcf15d2424342971a"} Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.925091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerDied","Data":"ba8ba4fafa2c7c76b71046102417802b83d71ed4bd680acee4ac9d1ac1c798ca"} Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.925106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerDied","Data":"dbcaf8bd84682ff8e8b1e29df651ef4c9d04e5bf7fc5794ff3f1016efeb46490"} Jan 29 15:57:37 crc kubenswrapper[4835]: I0129 15:57:37.925118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerDied","Data":"94b3e770c457a829a01ec383a30839c8e3de26a73e6f93cbb1452cf6a0d5dda6"} Jan 29 15:57:41 crc kubenswrapper[4835]: I0129 15:57:41.003333 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:41 crc kubenswrapper[4835]: I0129 15:57:41.004724 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:57:42 crc kubenswrapper[4835]: I0129 15:57:42.052961 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=< Jan 29 15:57:42 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:57:42 crc kubenswrapper[4835]: > Jan 29 15:57:46 crc kubenswrapper[4835]: E0129 15:57:46.186171 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.522823 4835 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-22ql4" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.615118 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle\") pod \"9a23091d-55d5-469e-9428-34435286ef9c\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.615270 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts\") pod \"9a23091d-55d5-469e-9428-34435286ef9c\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.615360 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs\") pod \"9a23091d-55d5-469e-9428-34435286ef9c\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.615420 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58f7b\" (UniqueName: \"kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b\") pod \"9a23091d-55d5-469e-9428-34435286ef9c\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.615541 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data\") pod \"9a23091d-55d5-469e-9428-34435286ef9c\" (UID: \"9a23091d-55d5-469e-9428-34435286ef9c\") " Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.616292 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs" (OuterVolumeSpecName: "logs") pod "9a23091d-55d5-469e-9428-34435286ef9c" (UID: "9a23091d-55d5-469e-9428-34435286ef9c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.621450 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts" (OuterVolumeSpecName: "scripts") pod "9a23091d-55d5-469e-9428-34435286ef9c" (UID: "9a23091d-55d5-469e-9428-34435286ef9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.629931 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b" (OuterVolumeSpecName: "kube-api-access-58f7b") pod "9a23091d-55d5-469e-9428-34435286ef9c" (UID: "9a23091d-55d5-469e-9428-34435286ef9c"). InnerVolumeSpecName "kube-api-access-58f7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.643193 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a23091d-55d5-469e-9428-34435286ef9c" (UID: "9a23091d-55d5-469e-9428-34435286ef9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.651242 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data" (OuterVolumeSpecName: "config-data") pod "9a23091d-55d5-469e-9428-34435286ef9c" (UID: "9a23091d-55d5-469e-9428-34435286ef9c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.717444 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.717517 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a23091d-55d5-469e-9428-34435286ef9c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.717588 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58f7b\" (UniqueName: \"kubernetes.io/projected/9a23091d-55d5-469e-9428-34435286ef9c-kube-api-access-58f7b\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.717603 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:46 crc kubenswrapper[4835]: I0129 15:57:46.717614 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a23091d-55d5-469e-9428-34435286ef9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.001572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-22ql4" event={"ID":"9a23091d-55d5-469e-9428-34435286ef9c","Type":"ContainerDied","Data":"30b957666fea3aa1d6f9f7a8db74c5fb1b7878ccf4e3c32f9e39c98b8355b653"} Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.001838 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b957666fea3aa1d6f9f7a8db74c5fb1b7878ccf4e3c32f9e39c98b8355b653" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.001623 4835 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-22ql4" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.281618 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 15:57:47 crc kubenswrapper[4835]: W0129 15:57:47.284600 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f1c686_0356_4885_b6f2_b5b80279f19f.slice/crio-c948dd315663194976e258a741d9371a7e553e8f95b9b76f2d15e4e409f4a1f7 WatchSource:0}: Error finding container c948dd315663194976e258a741d9371a7e553e8f95b9b76f2d15e4e409f4a1f7: Status 404 returned error can't find the container with id c948dd315663194976e258a741d9371a7e553e8f95b9b76f2d15e4e409f4a1f7 Jan 29 15:57:47 crc kubenswrapper[4835]: E0129 15:57:47.465619 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944" Jan 29 15:57:47 crc kubenswrapper[4835]: E0129 15:57:47.466110 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bbh588hf6h567h68fh59bh64bh87hcbh56dh56dh4h67hc4h564h556h58bh589h54ch67fh5cch685h69h546hcch558h5b9h69h589h64dhf9h5cdq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bthrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(95c9a919-e32a-49cd-b097-79fe61d7ec64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:57:47 crc kubenswrapper[4835]: E0129 15:57:47.467629 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.631003 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"] Jan 29 15:57:47 crc kubenswrapper[4835]: E0129 15:57:47.631451 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a23091d-55d5-469e-9428-34435286ef9c" containerName="placement-db-sync" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.631491 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a23091d-55d5-469e-9428-34435286ef9c" containerName="placement-db-sync" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.631723 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a23091d-55d5-469e-9428-34435286ef9c" containerName="placement-db-sync" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.632809 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.636855 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.637133 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.637259 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2xvpx" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.637440 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.637592 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.645975 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"] Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.653994 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.734754 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.734819 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.734994 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t5zr\" (UniqueName: \"kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735038 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735095 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735127 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735161 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd\") pod \"afbfe172-94cc-440d-bf46-b7e3207a53a0\" (UID: \"afbfe172-94cc-440d-bf46-b7e3207a53a0\") " Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735519 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735569 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735608 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735632 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb7b\" (UniqueName: 
\"kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735732 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735784 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.735879 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.736724 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.736945 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.739239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr" (OuterVolumeSpecName: "kube-api-access-9t5zr") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "kube-api-access-9t5zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.740656 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts" (OuterVolumeSpecName: "scripts") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.765021 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.803944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.826456 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data" (OuterVolumeSpecName: "config-data") pod "afbfe172-94cc-440d-bf46-b7e3207a53a0" (UID: "afbfe172-94cc-440d-bf46-b7e3207a53a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837630 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837694 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: 
\"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjb7b\" (UniqueName: \"kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837791 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838083 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.837875 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc 
kubenswrapper[4835]: I0129 15:57:47.838307 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838325 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838338 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t5zr\" (UniqueName: \"kubernetes.io/projected/afbfe172-94cc-440d-bf46-b7e3207a53a0-kube-api-access-9t5zr\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838351 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838363 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838374 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afbfe172-94cc-440d-bf46-b7e3207a53a0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.838386 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afbfe172-94cc-440d-bf46-b7e3207a53a0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.841548 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.841746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.841793 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.843000 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.843164 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle\") pod \"placement-bcd5f5d96-9c2ft\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.855443 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjb7b\" (UniqueName: \"kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b\") pod \"placement-bcd5f5d96-9c2ft\" 
(UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:47 crc kubenswrapper[4835]: I0129 15:57:47.967778 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.012964 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerID="31a33426b9ef395dd17ef2e1de93164e2013fc9d0b9cd505ba4cfad5594f8cea" exitCode=0 Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.013037 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerDied","Data":"31a33426b9ef395dd17ef2e1de93164e2013fc9d0b9cd505ba4cfad5594f8cea"} Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.024174 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.025205 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afbfe172-94cc-440d-bf46-b7e3207a53a0","Type":"ContainerDied","Data":"1ac207a7eec002aab4a9463e5f8b915fe0a9c965ec7fa77739ad2518b7f0b72d"} Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.025326 4835 scope.go:117] "RemoveContainer" containerID="482ec36238c8a379890194d81a3836cfc8dd11eb8d83536fcf15d2424342971a" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.027445 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerStarted","Data":"c948dd315663194976e258a741d9371a7e553e8f95b9b76f2d15e4e409f4a1f7"} Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.033033 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\"\"" pod="openstack/openstackclient" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.089799 4835 scope.go:117] "RemoveContainer" containerID="ba8ba4fafa2c7c76b71046102417802b83d71ed4bd680acee4ac9d1ac1c798ca" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.098349 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.112347 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.126135 4835 scope.go:117] "RemoveContainer" containerID="dbcaf8bd84682ff8e8b1e29df651ef4c9d04e5bf7fc5794ff3f1016efeb46490" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.137572 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.137983 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-notification-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138002 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-notification-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.138022 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="sg-core" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138029 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="sg-core" Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.138046 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="proxy-httpd" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138053 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="proxy-httpd" Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.138068 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-central-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138076 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-central-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138450 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-notification-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138494 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="proxy-httpd" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138513 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="sg-core" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.138524 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" containerName="ceilometer-central-agent" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.140516 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.143946 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.144149 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.152705 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.173417 4835 scope.go:117] "RemoveContainer" containerID="94b3e770c457a829a01ec383a30839c8e3de26a73e6f93cbb1452cf6a0d5dda6" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.245339 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.245613 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.245910 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.245992 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.246282 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.246437 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xc9b\" (UniqueName: \"kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.246590 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348498 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348580 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " 
pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348601 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348640 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348673 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xc9b\" (UniqueName: \"kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348709 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.348738 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.349903 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.352523 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.355223 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.356009 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.357112 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.358254 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0" Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.372606 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5xc9b\" (UniqueName: \"kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b\") pod \"ceilometer-0\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " pod="openstack/ceilometer-0"
Jan 29 15:57:48 crc kubenswrapper[4835]: W0129 15:57:48.459408 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaadbdc4_0e1d_4e7a_9f98_26a8ef8799b1.slice/crio-6666f16609d0c8d032d2b4ae64b760a59f3a7b05c51eda076f38150208590c6a WatchSource:0}: Error finding container 6666f16609d0c8d032d2b4ae64b760a59f3a7b05c51eda076f38150208590c6a: Status 404 returned error can't find the container with id 6666f16609d0c8d032d2b4ae64b760a59f3a7b05c51eda076f38150208590c6a
Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.467990 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"]
Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.474837 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.499794 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085"
Jan 29 15:57:48 crc kubenswrapper[4835]: E0129 15:57:48.500268 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.511927 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbfe172-94cc-440d-bf46-b7e3207a53a0" path="/var/lib/kubelet/pods/afbfe172-94cc-440d-bf46-b7e3207a53a0/volumes"
Jan 29 15:57:48 crc kubenswrapper[4835]: I0129 15:57:48.708875 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 15:57:48 crc kubenswrapper[4835]: W0129 15:57:48.713576 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe50479d_0914_4f6c_a9b7_57dd363a38df.slice/crio-9460cef778bf50373b792c6aa60d2d20e33a5b49c8fb7d7f292cfac73371c2c1 WatchSource:0}: Error finding container 9460cef778bf50373b792c6aa60d2d20e33a5b49c8fb7d7f292cfac73371c2c1: Status 404 returned error can't find the container with id 9460cef778bf50373b792c6aa60d2d20e33a5b49c8fb7d7f292cfac73371c2c1
Jan 29 15:57:49 crc kubenswrapper[4835]: I0129 15:57:49.036660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerStarted","Data":"6666f16609d0c8d032d2b4ae64b760a59f3a7b05c51eda076f38150208590c6a"}
Jan 29 15:57:49 crc kubenswrapper[4835]: I0129 15:57:49.038133 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerStarted","Data":"9460cef778bf50373b792c6aa60d2d20e33a5b49c8fb7d7f292cfac73371c2c1"}
Jan 29 15:57:49 crc kubenswrapper[4835]: I0129 15:57:49.041005 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerStarted","Data":"8916bd9b088f2a42117f9ccac869fd3b5628b655f7f6665b375b874a888bc7d9"}
Jan 29 15:57:50 crc kubenswrapper[4835]: I0129 15:57:50.051170 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerStarted","Data":"c65cca305647c572c443a687b71194272f7144b26cee020f19edc85b55bf49d5"}
Jan 29 15:57:50 crc kubenswrapper[4835]: I0129 15:57:50.053168 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerStarted","Data":"d866f0be4bff596f0e208f6621bf6965e406522a230fe73cfee91a2420bda235"}
Jan 29 15:57:50 crc kubenswrapper[4835]: I0129 15:57:50.053648 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6677554fc9-4vpbh"
Jan 29 15:57:50 crc kubenswrapper[4835]: I0129 15:57:50.053743 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6677554fc9-4vpbh"
Jan 29 15:57:50 crc kubenswrapper[4835]: I0129 15:57:50.089854 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6677554fc9-4vpbh" podStartSLOduration=13.089832408 podStartE2EDuration="13.089832408s" podCreationTimestamp="2026-01-29 15:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:57:50.089244223 +0000 UTC m=+1772.282287897" watchObservedRunningTime="2026-01-29 15:57:50.089832408 +0000 UTC m=+1772.282876062"
Jan 29 15:57:51 crc kubenswrapper[4835]: I0129 15:57:51.063238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerStarted","Data":"b0f9698f122586483286669c9c68b041121959ba078f0f206adad9e397fd176f"}
Jan 29 15:57:51 crc kubenswrapper[4835]: I0129 15:57:51.064763 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bcd5f5d96-9c2ft"
Jan 29 15:57:51 crc kubenswrapper[4835]: I0129 15:57:51.064865 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bcd5f5d96-9c2ft"
Jan 29 15:57:51 crc kubenswrapper[4835]: I0129 15:57:51.085487 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bcd5f5d96-9c2ft" podStartSLOduration=4.085443525 podStartE2EDuration="4.085443525s" podCreationTimestamp="2026-01-29 15:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:57:51.083987518 +0000 UTC m=+1773.277031182" watchObservedRunningTime="2026-01-29 15:57:51.085443525 +0000 UTC m=+1773.278487179"
Jan 29 15:57:52 crc kubenswrapper[4835]: I0129 15:57:52.068989 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:57:52 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:57:52 crc kubenswrapper[4835]: >
Jan 29 15:57:56 crc kubenswrapper[4835]: I0129 15:57:56.117934 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerStarted","Data":"f621a7ee980abdadb9510785914c9b599b5e327689471982d67e9a158e626384"}
Jan 29 15:57:56 crc kubenswrapper[4835]: I0129 15:57:56.120822 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerStarted","Data":"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697"}
Jan 29 15:57:57 crc kubenswrapper[4835]: I0129 15:57:57.128426 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerID="f621a7ee980abdadb9510785914c9b599b5e327689471982d67e9a158e626384" exitCode=0
Jan 29 15:57:57 crc kubenswrapper[4835]: I0129 15:57:57.128499 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerDied","Data":"f621a7ee980abdadb9510785914c9b599b5e327689471982d67e9a158e626384"}
Jan 29 15:57:57 crc kubenswrapper[4835]: I0129 15:57:57.865686 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6677554fc9-4vpbh"
Jan 29 15:57:57 crc kubenswrapper[4835]: I0129 15:57:57.867680 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6677554fc9-4vpbh"
Jan 29 15:58:01 crc kubenswrapper[4835]: E0129 15:58:01.597108 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155"
Jan 29 15:58:02 crc kubenswrapper[4835]: I0129 15:58:02.047270 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:58:02 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:58:02 crc kubenswrapper[4835]: >
Jan 29 15:58:02 crc kubenswrapper[4835]: I0129 15:58:02.492660 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085"
Jan 29 15:58:02 crc kubenswrapper[4835]: E0129 15:58:02.493226 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 15:58:05 crc kubenswrapper[4835]: I0129 15:58:05.195225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerStarted","Data":"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37"}
Jan 29 15:58:09 crc kubenswrapper[4835]: I0129 15:58:09.254437 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"95c9a919-e32a-49cd-b097-79fe61d7ec64","Type":"ContainerStarted","Data":"540959f4551d052c3cb5a4e4f683efeed182efa67d63708676587082e1b90021"}
Jan 29 15:58:09 crc kubenswrapper[4835]: I0129 15:58:09.257567 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerStarted","Data":"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087"}
Jan 29 15:58:09 crc kubenswrapper[4835]: I0129 15:58:09.260500 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerStarted","Data":"cca6cb344c450ca84d89393f81ad8e25dbaa55092d4440fc873f5c140125b10d"}
Jan 29 15:58:09 crc kubenswrapper[4835]: I0129 15:58:09.277132 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.846648428 podStartE2EDuration="42.277114224s" podCreationTimestamp="2026-01-29 15:57:27 +0000 UTC" firstStartedPulling="2026-01-29 15:57:28.571487882 +0000 UTC m=+1750.764531536" lastFinishedPulling="2026-01-29 15:58:08.001953678 +0000 UTC m=+1790.194997332" observedRunningTime="2026-01-29 15:58:09.27299047 +0000 UTC m=+1791.466034124" watchObservedRunningTime="2026-01-29 15:58:09.277114224 +0000 UTC m=+1791.470157878"
Jan 29 15:58:09 crc kubenswrapper[4835]: I0129 15:58:09.311087 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhf9x" podStartSLOduration=14.36771051 podStartE2EDuration="34.311068854s" podCreationTimestamp="2026-01-29 15:57:35 +0000 UTC" firstStartedPulling="2026-01-29 15:57:48.015539196 +0000 UTC m=+1770.208582850" lastFinishedPulling="2026-01-29 15:58:07.95889754 +0000 UTC m=+1790.151941194" observedRunningTime="2026-01-29 15:58:09.305094584 +0000 UTC m=+1791.498138238" watchObservedRunningTime="2026-01-29 15:58:09.311068854 +0000 UTC m=+1791.504112508"
Jan 29 15:58:10 crc kubenswrapper[4835]: I0129 15:58:10.271501 4835 generic.go:334] "Generic (PLEG): container finished" podID="31877ce4-9de7-4a70-a383-ea7143b6b61d" containerID="9c477481de227c5703c9afbeee010830e8cbee3616601ec879c02c3fb4688764" exitCode=0
Jan 29 15:58:10 crc kubenswrapper[4835]: I0129 15:58:10.271590 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pzf56" event={"ID":"31877ce4-9de7-4a70-a383-ea7143b6b61d","Type":"ContainerDied","Data":"9c477481de227c5703c9afbeee010830e8cbee3616601ec879c02c3fb4688764"}
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.281858 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerStarted","Data":"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4"}
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.613969 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pzf56"
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.689215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle\") pod \"31877ce4-9de7-4a70-a383-ea7143b6b61d\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") "
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.689583 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xc5\" (UniqueName: \"kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5\") pod \"31877ce4-9de7-4a70-a383-ea7143b6b61d\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") "
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.689663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data\") pod \"31877ce4-9de7-4a70-a383-ea7143b6b61d\" (UID: \"31877ce4-9de7-4a70-a383-ea7143b6b61d\") "
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.693933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "31877ce4-9de7-4a70-a383-ea7143b6b61d" (UID: "31877ce4-9de7-4a70-a383-ea7143b6b61d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.696447 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5" (OuterVolumeSpecName: "kube-api-access-w4xc5") pod "31877ce4-9de7-4a70-a383-ea7143b6b61d" (UID: "31877ce4-9de7-4a70-a383-ea7143b6b61d"). InnerVolumeSpecName "kube-api-access-w4xc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.734645 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31877ce4-9de7-4a70-a383-ea7143b6b61d" (UID: "31877ce4-9de7-4a70-a383-ea7143b6b61d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.791452 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.791499 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31877ce4-9de7-4a70-a383-ea7143b6b61d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 15:58:11 crc kubenswrapper[4835]: I0129 15:58:11.791510 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xc5\" (UniqueName: \"kubernetes.io/projected/31877ce4-9de7-4a70-a383-ea7143b6b61d-kube-api-access-w4xc5\") on node \"crc\" DevicePath \"\""
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.076227 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=<
Jan 29 15:58:12 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s
Jan 29 15:58:12 crc kubenswrapper[4835]: >
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.291276 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pzf56" event={"ID":"31877ce4-9de7-4a70-a383-ea7143b6b61d","Type":"ContainerDied","Data":"76c6a4bf72ec068096350ff0efa4f4ff5d183fd87e021058730e6e80a2605b52"}
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.291328 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c6a4bf72ec068096350ff0efa4f4ff5d183fd87e021058730e6e80a2605b52"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.291296 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pzf56"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.291565 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.318203 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.161060662 podStartE2EDuration="24.31818109s" podCreationTimestamp="2026-01-29 15:57:48 +0000 UTC" firstStartedPulling="2026-01-29 15:57:48.71586114 +0000 UTC m=+1770.908904794" lastFinishedPulling="2026-01-29 15:58:10.872981568 +0000 UTC m=+1793.066025222" observedRunningTime="2026-01-29 15:58:12.313886882 +0000 UTC m=+1794.506930546" watchObservedRunningTime="2026-01-29 15:58:12.31818109 +0000 UTC m=+1794.511224764"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.521897 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"]
Jan 29 15:58:12 crc kubenswrapper[4835]: E0129 15:58:12.522550 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" containerName="barbican-db-sync"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.522615 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" containerName="barbican-db-sync"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.523043 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" containerName="barbican-db-sync"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.524261 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.538161 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.538429 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zs6vn"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.538593 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.548544 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.550068 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.555994 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.570446 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.578765 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.654238 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.656855 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.693919 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741703 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741793 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741816 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6q5\" (UniqueName: \"kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741891 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741911 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4557w\" (UniqueName: \"kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741930 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741978 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.741994 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.742022 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.754145 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.756004 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-566b7cbc86-chzgb"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.757855 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.760883 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"]
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843456 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4557w\" (UniqueName: \"kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843543 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjrzs\" (UniqueName: \"kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843569 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843609 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843730 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843917 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.843952 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844107 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844219 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844329 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844338 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844429 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6q5\" (UniqueName: \"kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.844517 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.849521 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.852108 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.855573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.856945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.858340 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.861431 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf6q5\" (UniqueName: \"kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.861459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4557w\" (UniqueName: \"kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w\") pod \"barbican-keystone-listener-748568c9b6-g5d2z\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.861881 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom\") pod \"barbican-worker-5f7ddf4db5-rxpvj\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.867962 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.888947 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.946898 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnnb\" (UniqueName: \"kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947110 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947153 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947205 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh"
Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947239 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs\") pod \"barbican-api-566b7cbc86-chzgb\" (UID:
\"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947295 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947318 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947356 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947496 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjrzs\" (UniqueName: \"kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947526 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: 
\"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.947574 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.948328 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.948560 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.948757 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.948795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " 
pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.948898 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.971253 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjrzs\" (UniqueName: \"kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs\") pod \"dnsmasq-dns-86bf77c6df-5tkqh\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:12 crc kubenswrapper[4835]: I0129 15:58:12.986985 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.048901 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.049415 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.049442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.049593 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.049636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnnb\" (UniqueName: \"kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.049363 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.069136 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.069236 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.076341 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnnb\" (UniqueName: \"kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.095650 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data\") pod \"barbican-api-566b7cbc86-chzgb\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.273051 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"] Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.355330 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"] Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.384844 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:13 crc kubenswrapper[4835]: W0129 15:58:13.386077 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod921df059_21ec_4ba7_9093_bbcc7c788991.slice/crio-2b37da97af7e488828e1712e1c4350ac8f492f01b7963e69f9bfbeb99cc158fe WatchSource:0}: Error finding container 2b37da97af7e488828e1712e1c4350ac8f492f01b7963e69f9bfbeb99cc158fe: Status 404 returned error can't find the container with id 2b37da97af7e488828e1712e1c4350ac8f492f01b7963e69f9bfbeb99cc158fe Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.827187 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"] Jan 29 15:58:13 crc kubenswrapper[4835]: W0129 15:58:13.849024 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41e0e4fe_b506_404a_940a_5d10b99221b0.slice/crio-2f23e07a7dab840ca8df4cea456a00af5f2645252652cb66e80a42f65997fde6 WatchSource:0}: Error finding container 2f23e07a7dab840ca8df4cea456a00af5f2645252652cb66e80a42f65997fde6: Status 404 returned error can't find the container with id 2f23e07a7dab840ca8df4cea456a00af5f2645252652cb66e80a42f65997fde6 Jan 29 15:58:13 crc kubenswrapper[4835]: I0129 15:58:13.896343 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"] Jan 29 15:58:14 crc kubenswrapper[4835]: I0129 15:58:14.307293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" event={"ID":"41e0e4fe-b506-404a-940a-5d10b99221b0","Type":"ContainerStarted","Data":"2f23e07a7dab840ca8df4cea456a00af5f2645252652cb66e80a42f65997fde6"} Jan 29 15:58:14 crc kubenswrapper[4835]: I0129 15:58:14.308375 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" 
event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerStarted","Data":"2b37da97af7e488828e1712e1c4350ac8f492f01b7963e69f9bfbeb99cc158fe"} Jan 29 15:58:14 crc kubenswrapper[4835]: I0129 15:58:14.309535 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerStarted","Data":"14ca0f1782abf35cecba1d01d62c137d92fd8eb78703ad7494275dbc7a115918"} Jan 29 15:58:14 crc kubenswrapper[4835]: I0129 15:58:14.311822 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerStarted","Data":"80c072e7b310027788dce8b0ec0008f139036bbb87a82bf19323cbbb69677262"} Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.205533 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.225772 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.225902 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.229223 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.231174 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.323139 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerStarted","Data":"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8"} Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.326104 4835 generic.go:334] "Generic (PLEG): container finished" podID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerID="9dd41f4fc4c9603f6667ef1e31dbfe08c055eccf11eef1c6c34a4eaeaf823dc3" exitCode=0 Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.326160 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" event={"ID":"41e0e4fe-b506-404a-940a-5d10b99221b0","Type":"ContainerDied","Data":"9dd41f4fc4c9603f6667ef1e31dbfe08c055eccf11eef1c6c34a4eaeaf823dc3"} Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.393360 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.393422 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs\") pod \"barbican-api-564bdc756-f7cnh\" 
(UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.393775 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.393916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.394041 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.394174 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.394241 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm4jl\" (UniqueName: 
\"kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495314 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495404 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495430 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm4jl\" (UniqueName: \"kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495453 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495493 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495576 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.495606 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.496834 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.500341 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.500739 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs\") pod 
\"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.501140 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.503170 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.503746 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.517191 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm4jl\" (UniqueName: \"kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl\") pod \"barbican-api-564bdc756-f7cnh\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.571033 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.701168 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:15 crc kubenswrapper[4835]: I0129 15:58:15.701373 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.343364 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" event={"ID":"41e0e4fe-b506-404a-940a-5d10b99221b0","Type":"ContainerStarted","Data":"43c9eddc03d1c76b79784b5c9c84da3806538032d17725ebda520a1c3bfd2b81"} Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.351210 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerStarted","Data":"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166"} Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.351340 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.351372 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.369872 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" podStartSLOduration=4.369832258 podStartE2EDuration="4.369832258s" podCreationTimestamp="2026-01-29 15:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:16.361274863 +0000 UTC m=+1798.554318527" watchObservedRunningTime="2026-01-29 15:58:16.369832258 +0000 UTC 
m=+1798.562875942" Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.395943 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-566b7cbc86-chzgb" podStartSLOduration=4.395919461 podStartE2EDuration="4.395919461s" podCreationTimestamp="2026-01-29 15:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:16.388592787 +0000 UTC m=+1798.581636451" watchObservedRunningTime="2026-01-29 15:58:16.395919461 +0000 UTC m=+1798.588963115" Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.754032 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zhf9x" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:16 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:16 crc kubenswrapper[4835]: > Jan 29 15:58:16 crc kubenswrapper[4835]: I0129 15:58:16.882428 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.362542 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerStarted","Data":"d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.362911 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerStarted","Data":"14600d6710759fddf7a9e83ee289213ab1d2197385b5a1b0e2c5a007564a6d2e"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.365853 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" 
event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerStarted","Data":"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.370287 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerStarted","Data":"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.370343 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerStarted","Data":"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.372266 4835 generic.go:334] "Generic (PLEG): container finished" podID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" containerID="4c047d38e9f0fb7d24900c59d7d8a6bf6d2cdca2cca5301978c03cb19499c5bb" exitCode=0 Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.372339 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kkwct" event={"ID":"eea32b2d-5036-4574-8144-a3e3dd02fdf6","Type":"ContainerDied","Data":"4c047d38e9f0fb7d24900c59d7d8a6bf6d2cdca2cca5301978c03cb19499c5bb"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.374763 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerStarted","Data":"1d1e2e8d5b3cc6c604736856ef650f21234223173aea5d23d04e47451a48e118"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.374785 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" 
event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerStarted","Data":"4038d593222cca32b6f2d81c1b1addde266dc5987b9461c587363594ca454f06"} Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.375333 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:17 crc kubenswrapper[4835]: I0129 15:58:17.491984 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:58:17 crc kubenswrapper[4835]: E0129 15:58:17.493345 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.401642 4835 generic.go:334] "Generic (PLEG): container finished" podID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerID="695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0" exitCode=0 Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.401727 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerDied","Data":"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0"} Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.449668 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" podStartSLOduration=3.36615598 podStartE2EDuration="6.449645508s" podCreationTimestamp="2026-01-29 15:58:12 +0000 UTC" firstStartedPulling="2026-01-29 15:58:13.387775399 +0000 UTC m=+1795.580819043" 
lastFinishedPulling="2026-01-29 15:58:16.471264917 +0000 UTC m=+1798.664308571" observedRunningTime="2026-01-29 15:58:18.434617361 +0000 UTC m=+1800.627661015" watchObservedRunningTime="2026-01-29 15:58:18.449645508 +0000 UTC m=+1800.642689172" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.465323 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" podStartSLOduration=3.398514819 podStartE2EDuration="6.465299999s" podCreationTimestamp="2026-01-29 15:58:12 +0000 UTC" firstStartedPulling="2026-01-29 15:58:13.403920603 +0000 UTC m=+1795.596964257" lastFinishedPulling="2026-01-29 15:58:16.470705783 +0000 UTC m=+1798.663749437" observedRunningTime="2026-01-29 15:58:18.459986336 +0000 UTC m=+1800.653029990" watchObservedRunningTime="2026-01-29 15:58:18.465299999 +0000 UTC m=+1800.658343653" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.826023 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kkwct" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880112 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880233 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880266 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5twv8\" (UniqueName: 
\"kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880315 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880225 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880345 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880518 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts\") pod \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\" (UID: \"eea32b2d-5036-4574-8144-a3e3dd02fdf6\") " Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.880991 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eea32b2d-5036-4574-8144-a3e3dd02fdf6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.886586 
4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts" (OuterVolumeSpecName: "scripts") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.886953 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.887575 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8" (OuterVolumeSpecName: "kube-api-access-5twv8") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "kube-api-access-5twv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.931373 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.947696 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data" (OuterVolumeSpecName: "config-data") pod "eea32b2d-5036-4574-8144-a3e3dd02fdf6" (UID: "eea32b2d-5036-4574-8144-a3e3dd02fdf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.982534 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.982572 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.982585 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5twv8\" (UniqueName: \"kubernetes.io/projected/eea32b2d-5036-4574-8144-a3e3dd02fdf6-kube-api-access-5twv8\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.982596 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:18 crc kubenswrapper[4835]: I0129 15:58:18.982607 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eea32b2d-5036-4574-8144-a3e3dd02fdf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.415954 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kkwct" 
event={"ID":"eea32b2d-5036-4574-8144-a3e3dd02fdf6","Type":"ContainerDied","Data":"34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7"} Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.416289 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34677a53488303b13b064e37d11c15670faf0693a9fd3732d1d8b6e159b44db7" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.416055 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kkwct" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.581123 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wpbd5"] Jan 29 15:58:19 crc kubenswrapper[4835]: E0129 15:58:19.581693 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" containerName="cinder-db-sync" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.581714 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" containerName="cinder-db-sync" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.581891 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" containerName="cinder-db-sync" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.582524 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.604859 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wpbd5"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.654434 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.655940 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.660033 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.660100 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.660177 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.660361 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-gjvlt" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.675652 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-370b-account-create-update-gspt4"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.677902 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.684743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.697874 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7vwx\" (UniqueName: \"kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.697916 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.701523 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9b4g2"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.702694 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.729967 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.755189 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-370b-account-create-update-gspt4"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.766594 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9b4g2"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799712 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7vwx\" (UniqueName: \"kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799857 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799887 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799906 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799925 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.799961 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8c2n\" (UniqueName: \"kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.800028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbtqq\" (UniqueName: 
\"kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.800048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.800809 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.806517 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.806741 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="dnsmasq-dns" containerID="cri-o://43c9eddc03d1c76b79784b5c9c84da3806538032d17725ebda520a1c3bfd2b81" gracePeriod=10 Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.814444 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.816146 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.821693 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7vwx\" (UniqueName: \"kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx\") pod \"nova-api-db-create-wpbd5\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.829364 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.918104 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.921858 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.924370 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.924522 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.924625 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.924759 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8c2n\" (UniqueName: \"kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.924979 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvllv\" (UniqueName: \"kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.925153 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.925425 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbtqq\" 
(UniqueName: \"kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.925659 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.920240 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.925307 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.925248 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.940310 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.956630 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.984553 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:19 crc kubenswrapper[4835]: I0129 15:58:19.999450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.000396 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbtqq\" (UniqueName: \"kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq\") pod \"cinder-scheduler-0\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.004085 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8c2n\" (UniqueName: \"kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n\") pod \"nova-api-370b-account-create-update-gspt4\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.011989 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-17d3-account-create-update-7zvtj"] Jan 29 15:58:20 crc 
kubenswrapper[4835]: I0129 15:58:20.019446 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.023441 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030438 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030517 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strzn\" (UniqueName: \"kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030638 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvllv\" (UniqueName: \"kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 
15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030667 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030696 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hxbt\" (UniqueName: \"kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030740 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " 
pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.030913 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.032121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.052865 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvllv\" (UniqueName: \"kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv\") pod \"nova-cell0-db-create-9b4g2\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.054859 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lqgfx"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.056504 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.090176 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-17d3-account-create-update-7zvtj"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.105849 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lqgfx"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.130023 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.133945 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134327 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134378 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134404 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134435 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-strzn\" (UniqueName: \"kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134501 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134520 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hxbt\" (UniqueName: \"kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134572 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134602 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134721 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.134771 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll2v\" (UniqueName: \"kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.135748 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.135945 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.136893 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.137808 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.138339 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.139230 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.139779 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.153905 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3dd7-account-create-update-k8qqd"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.155317 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.161482 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.164894 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3dd7-account-create-update-k8qqd"] Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.183226 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hxbt\" (UniqueName: \"kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt\") pod \"dnsmasq-dns-76bb4997-f74rm\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.186234 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236311 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c695t\" (UniqueName: \"kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236452 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bll2v\" (UniqueName: 
\"kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236512 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236566 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236610 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236727 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236771 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5746f\" (UniqueName: \"kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236875 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.236921 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.272125 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.294173 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.320702 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338702 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338773 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338810 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338837 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5746f\" (UniqueName: \"kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338886 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.338892 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.339310 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c695t\" (UniqueName: \"kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.339400 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.339703 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.339749 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.406514 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.412059 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strzn\" (UniqueName: \"kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn\") pod \"nova-cell0-17d3-account-create-update-7zvtj\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.414299 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.415216 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.415605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.417419 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bll2v\" (UniqueName: 
\"kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v\") pod \"nova-cell1-db-create-lqgfx\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.417918 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.419155 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.419387 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.420313 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c695t\" (UniqueName: \"kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t\") pod \"nova-cell1-3dd7-account-create-update-k8qqd\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.421012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5746f\" (UniqueName: \"kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") 
" pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.421422 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom\") pod \"cinder-api-0\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.707435 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.711948 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.714896 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:20 crc kubenswrapper[4835]: I0129 15:58:20.715637 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.439883 4835 generic.go:334] "Generic (PLEG): container finished" podID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerID="43c9eddc03d1c76b79784b5c9c84da3806538032d17725ebda520a1c3bfd2b81" exitCode=0 Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.439947 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" event={"ID":"41e0e4fe-b506-404a-940a-5d10b99221b0","Type":"ContainerDied","Data":"43c9eddc03d1c76b79784b5c9c84da3806538032d17725ebda520a1c3bfd2b81"} Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.446826 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerStarted","Data":"1fa77cf0ee2adb7de043210724e572a07803b3db007e915a2b21ef69b093b4fb"} Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.447067 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.479683 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-564bdc756-f7cnh" podStartSLOduration=6.479664888 podStartE2EDuration="6.479664888s" podCreationTimestamp="2026-01-29 15:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:21.466587151 +0000 UTC m=+1803.659630805" watchObservedRunningTime="2026-01-29 15:58:21.479664888 +0000 UTC m=+1803.672708542" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.692690 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.787811 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.791014 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-17d3-account-create-update-7zvtj"] Jan 29 15:58:21 crc kubenswrapper[4835]: W0129 15:58:21.798817 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55d2384d_74ba_4054_8760_33b5d1fa47ed.slice/crio-5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe WatchSource:0}: Error finding container 5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe: Status 404 returned error can't find the container with id 5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.800844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wpbd5"] Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.887824 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjrzs\" (UniqueName: \"kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.887977 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.888024 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: 
\"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.888069 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.888096 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.888166 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc\") pod \"41e0e4fe-b506-404a-940a-5d10b99221b0\" (UID: \"41e0e4fe-b506-404a-940a-5d10b99221b0\") " Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.894367 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs" (OuterVolumeSpecName: "kube-api-access-wjrzs") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "kube-api-access-wjrzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.972102 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.973777 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.995495 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.995527 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:21 crc kubenswrapper[4835]: I0129 15:58:21.995539 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjrzs\" (UniqueName: \"kubernetes.io/projected/41e0e4fe-b506-404a-940a-5d10b99221b0-kube-api-access-wjrzs\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.017316 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.017426 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.019187 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config" (OuterVolumeSpecName: "config") pod "41e0e4fe-b506-404a-940a-5d10b99221b0" (UID: "41e0e4fe-b506-404a-940a-5d10b99221b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.071137 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.087705 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:22 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:22 crc kubenswrapper[4835]: > Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.100611 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.100643 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-dns-svc\") on 
node \"crc\" DevicePath \"\"" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.100652 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41e0e4fe-b506-404a-940a-5d10b99221b0-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.146221 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3dd7-account-create-update-k8qqd"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.170142 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.182213 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-370b-account-create-update-gspt4"] Jan 29 15:58:22 crc kubenswrapper[4835]: W0129 15:58:22.189799 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a5379f6_0b77_4645_9cc1_aa6027b0ade8.slice/crio-e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc WatchSource:0}: Error finding container e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc: Status 404 returned error can't find the container with id e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.192081 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9b4g2"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.199538 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lqgfx"] Jan 29 15:58:22 crc kubenswrapper[4835]: W0129 15:58:22.201078 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd684c443_8def_40a8_b694_41ccaf6359b2.slice/crio-5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990 WatchSource:0}: Error finding 
container 5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990: Status 404 returned error can't find the container with id 5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990 Jan 29 15:58:22 crc kubenswrapper[4835]: W0129 15:58:22.206644 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07fc1ac9_38d8_4a80_837c_1d4daa8edecd.slice/crio-0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28 WatchSource:0}: Error finding container 0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28: Status 404 returned error can't find the container with id 0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28 Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.207374 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:22 crc kubenswrapper[4835]: W0129 15:58:22.230867 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb576efc4_2651_4f93_baed_271c66e46be8.slice/crio-4c07c9934d19dac4286472d6a79ffb9a393993ef56ff750156b08cedf53880c5 WatchSource:0}: Error finding container 4c07c9934d19dac4286472d6a79ffb9a393993ef56ff750156b08cedf53880c5: Status 404 returned error can't find the container with id 4c07c9934d19dac4286472d6a79ffb9a393993ef56ff750156b08cedf53880c5 Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.457330 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerStarted","Data":"4c07c9934d19dac4286472d6a79ffb9a393993ef56ff750156b08cedf53880c5"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.458836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" 
event={"ID":"d00de9b8-ac4d-473d-b1fe-0cfde39ed538","Type":"ContainerStarted","Data":"de02e023dc114fbad8a74f4ce095c78cfdc1a1265c35ec7871cdff82585a9818"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.460323 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerStarted","Data":"682945b05db2dbd414f981c861dcc8cc41a2447345c1bb13575ef7328ee30f3d"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.461644 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerStarted","Data":"fca7fe62adaf5a1eff708721ff8a4f2e5c464075dc2a4cb14c99ef8be678ce3a"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.463709 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" event={"ID":"41e0e4fe-b506-404a-940a-5d10b99221b0","Type":"ContainerDied","Data":"2f23e07a7dab840ca8df4cea456a00af5f2645252652cb66e80a42f65997fde6"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.463754 4835 scope.go:117] "RemoveContainer" containerID="43c9eddc03d1c76b79784b5c9c84da3806538032d17725ebda520a1c3bfd2b81" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.463887 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bf77c6df-5tkqh" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.476682 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpbd5" event={"ID":"55d2384d-74ba-4054-8760-33b5d1fa47ed","Type":"ContainerStarted","Data":"5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.478634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9b4g2" event={"ID":"07fc1ac9-38d8-4a80-837c-1d4daa8edecd","Type":"ContainerStarted","Data":"0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.482352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" event={"ID":"40d3c372-32e0-4fa4-8b75-a029eb3c27d6","Type":"ContainerStarted","Data":"29ec6141a2442550534f1d4aae07225358e931aa1e8f6b07dd3ef1d6ad7e31d3"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.486267 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-370b-account-create-update-gspt4" event={"ID":"4a5379f6-0b77-4645-9cc1-aa6027b0ade8","Type":"ContainerStarted","Data":"e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc"} Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.505158 4835 scope.go:117] "RemoveContainer" containerID="9dd41f4fc4c9603f6667ef1e31dbfe08c055eccf11eef1c6c34a4eaeaf823dc3" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.524067 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.524452 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.524512 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-86bf77c6df-5tkqh"] Jan 29 15:58:22 crc kubenswrapper[4835]: I0129 15:58:22.524534 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lqgfx" event={"ID":"d684c443-8def-40a8-b694-41ccaf6359b2","Type":"ContainerStarted","Data":"5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990"} Jan 29 15:58:23 crc kubenswrapper[4835]: I0129 15:58:23.529835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lqgfx" event={"ID":"d684c443-8def-40a8-b694-41ccaf6359b2","Type":"ContainerStarted","Data":"61604143c8675d3e148a9653070e2523fa21b55358537c76a5a550c480dd6178"} Jan 29 15:58:23 crc kubenswrapper[4835]: I0129 15:58:23.532712 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpbd5" event={"ID":"55d2384d-74ba-4054-8760-33b5d1fa47ed","Type":"ContainerStarted","Data":"49a71a352ed53533222867f731b5079ce92b278c652a47b895136281569e2554"} Jan 29 15:58:23 crc kubenswrapper[4835]: I0129 15:58:23.534320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerStarted","Data":"a178bbdb8daa123c4ccc54844248519caf6837bf8d072d1d1c023fe089ae4ea6"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.427756 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.504053 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" path="/var/lib/kubelet/pods/41e0e4fe-b506-404a-940a-5d10b99221b0/volumes" Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 
15:58:24.543210 4835 generic.go:334] "Generic (PLEG): container finished" podID="fcced40f-58fc-451b-b87a-4d607febc581" containerID="a178bbdb8daa123c4ccc54844248519caf6837bf8d072d1d1c023fe089ae4ea6" exitCode=0 Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.543266 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerDied","Data":"a178bbdb8daa123c4ccc54844248519caf6837bf8d072d1d1c023fe089ae4ea6"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.546497 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerStarted","Data":"0aeab220ffeb52a93ec6a03d30fdc4d08cbcf2b4ec708911a3cf5e97169e246a"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.550041 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" event={"ID":"d00de9b8-ac4d-473d-b1fe-0cfde39ed538","Type":"ContainerStarted","Data":"fe35fc9fe00311f2a2a93d003c1ee89fd0b28151e8d0b023e21aa2fb9fec2a37"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.552082 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9b4g2" event={"ID":"07fc1ac9-38d8-4a80-837c-1d4daa8edecd","Type":"ContainerStarted","Data":"a73ec550d69bf4647d917ba77a530d4586b65fc08a4e29e5b8caebed151f48d7"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.553250 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" event={"ID":"40d3c372-32e0-4fa4-8b75-a029eb3c27d6","Type":"ContainerStarted","Data":"9bf70d93273d409efcf76ced575d4a904a19050bbe879a419756759665374eb8"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.555900 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-370b-account-create-update-gspt4" 
event={"ID":"4a5379f6-0b77-4645-9cc1-aa6027b0ade8","Type":"ContainerStarted","Data":"c2c0850d47993a63de0d0ffdab0a6d85f2eb69d829db94ee24b2af2e9703ece8"} Jan 29 15:58:24 crc kubenswrapper[4835]: I0129 15:58:24.593144 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-wpbd5" podStartSLOduration=5.593121187 podStartE2EDuration="5.593121187s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:24.580881571 +0000 UTC m=+1806.773925225" watchObservedRunningTime="2026-01-29 15:58:24.593121187 +0000 UTC m=+1806.786164841" Jan 29 15:58:25 crc kubenswrapper[4835]: I0129 15:58:25.582142 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-370b-account-create-update-gspt4" podStartSLOduration=6.582125407 podStartE2EDuration="6.582125407s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:25.578546558 +0000 UTC m=+1807.771590222" watchObservedRunningTime="2026-01-29 15:58:25.582125407 +0000 UTC m=+1807.775169061" Jan 29 15:58:25 crc kubenswrapper[4835]: I0129 15:58:25.597536 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" podStartSLOduration=6.597515633 podStartE2EDuration="6.597515633s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:25.595980954 +0000 UTC m=+1807.789024628" watchObservedRunningTime="2026-01-29 15:58:25.597515633 +0000 UTC m=+1807.790559287" Jan 29 15:58:25 crc kubenswrapper[4835]: I0129 15:58:25.618040 4835 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" podStartSLOduration=6.618016616 podStartE2EDuration="6.618016616s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:25.608671102 +0000 UTC m=+1807.801714756" watchObservedRunningTime="2026-01-29 15:58:25.618016616 +0000 UTC m=+1807.811060270" Jan 29 15:58:25 crc kubenswrapper[4835]: I0129 15:58:25.625653 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-lqgfx" podStartSLOduration=6.625633147 podStartE2EDuration="6.625633147s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:25.622906808 +0000 UTC m=+1807.815950462" watchObservedRunningTime="2026-01-29 15:58:25.625633147 +0000 UTC m=+1807.818676811" Jan 29 15:58:25 crc kubenswrapper[4835]: I0129 15:58:25.644496 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-9b4g2" podStartSLOduration=6.644455408 podStartE2EDuration="6.644455408s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:25.636302684 +0000 UTC m=+1807.829346338" watchObservedRunningTime="2026-01-29 15:58:25.644455408 +0000 UTC m=+1807.837499062" Jan 29 15:58:26 crc kubenswrapper[4835]: I0129 15:58:26.756671 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zhf9x" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:26 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:26 
crc kubenswrapper[4835]: > Jan 29 15:58:27 crc kubenswrapper[4835]: I0129 15:58:27.386218 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:27 crc kubenswrapper[4835]: I0129 15:58:27.558620 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:28 crc kubenswrapper[4835]: I0129 15:58:28.468767 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:28 crc kubenswrapper[4835]: I0129 15:58:28.468770 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:29 crc kubenswrapper[4835]: I0129 15:58:29.470699 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 29 15:58:29 crc kubenswrapper[4835]: I0129 15:58:29.492266 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:58:29 crc kubenswrapper[4835]: E0129 15:58:29.492593 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:58:29 crc kubenswrapper[4835]: I0129 15:58:29.577761 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:30 crc kubenswrapper[4835]: I0129 15:58:30.576937 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:31 crc kubenswrapper[4835]: I0129 15:58:31.631701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerStarted","Data":"d67b46f0ac9235ecc15a432d766b5f7b37bce77e99242b8805146a71f5c3547b"} Jan 29 15:58:31 crc kubenswrapper[4835]: I0129 15:58:31.635863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" 
event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerStarted","Data":"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e"} Jan 29 15:58:31 crc kubenswrapper[4835]: I0129 15:58:31.638352 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerStarted","Data":"6106f7585ea7f844099a776fe785fa3fb3409db8617c0c3ccf9f99bfa090ad01"} Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.045761 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:32 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:32 crc kubenswrapper[4835]: > Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.428823 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.564722 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.576653 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.647872 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.647993 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api" containerID="cri-o://d67b46f0ac9235ecc15a432d766b5f7b37bce77e99242b8805146a71f5c3547b" gracePeriod=30 Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.649638 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api-log" containerID="cri-o://0aeab220ffeb52a93ec6a03d30fdc4d08cbcf2b4ec708911a3cf5e97169e246a" gracePeriod=30 Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.678227 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.678208196 podStartE2EDuration="13.678208196s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:32.668999966 +0000 UTC m=+1814.862043630" watchObservedRunningTime="2026-01-29 15:58:32.678208196 +0000 UTC m=+1814.871251850" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.695145 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76bb4997-f74rm" podStartSLOduration=13.69512192 podStartE2EDuration="13.69512192s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:32.693769646 +0000 UTC m=+1814.886813300" 
watchObservedRunningTime="2026-01-29 15:58:32.69512192 +0000 UTC m=+1814.888165594" Jan 29 15:58:32 crc kubenswrapper[4835]: I0129 15:58:32.719251 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hjvxh" podStartSLOduration=33.778507064 podStartE2EDuration="6m27.719231403s" podCreationTimestamp="2026-01-29 15:52:05 +0000 UTC" firstStartedPulling="2026-01-29 15:52:35.300885337 +0000 UTC m=+1457.493928991" lastFinishedPulling="2026-01-29 15:58:29.241609676 +0000 UTC m=+1811.434653330" observedRunningTime="2026-01-29 15:58:32.715046658 +0000 UTC m=+1814.908090322" watchObservedRunningTime="2026-01-29 15:58:32.719231403 +0000 UTC m=+1814.912275057" Jan 29 15:58:33 crc kubenswrapper[4835]: I0129 15:58:33.551703 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:33 crc kubenswrapper[4835]: I0129 15:58:33.551775 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:33 crc kubenswrapper[4835]: I0129 15:58:33.671867 4835 generic.go:334] "Generic (PLEG): container finished" podID="b576efc4-2651-4f93-baed-271c66e46be8" containerID="0aeab220ffeb52a93ec6a03d30fdc4d08cbcf2b4ec708911a3cf5e97169e246a" exitCode=143 Jan 29 15:58:33 crc kubenswrapper[4835]: I0129 15:58:33.672368 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerDied","Data":"0aeab220ffeb52a93ec6a03d30fdc4d08cbcf2b4ec708911a3cf5e97169e246a"} Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.511896 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.512804 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.515295 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api-log" containerStatusID={"Type":"cri-o","ID":"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8"} pod="openstack/barbican-api-566b7cbc86-chzgb" containerMessage="Container barbican-api-log failed liveness probe, will be restarted" Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.515710 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" containerID="cri-o://a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8" gracePeriod=30 Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.581788 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.683088 4835 generic.go:334] "Generic (PLEG): 
container finished" podID="b576efc4-2651-4f93-baed-271c66e46be8" containerID="d67b46f0ac9235ecc15a432d766b5f7b37bce77e99242b8805146a71f5c3547b" exitCode=0 Jan 29 15:58:34 crc kubenswrapper[4835]: I0129 15:58:34.683205 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerDied","Data":"d67b46f0ac9235ecc15a432d766b5f7b37bce77e99242b8805146a71f5c3547b"} Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.582662 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.694025 4835 generic.go:334] "Generic (PLEG): container finished" podID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerID="a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8" exitCode=143 Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.694067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerDied","Data":"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8"} Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.716276 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.718424 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.168:8776/healthcheck\": dial tcp 10.217.0.168:8776: connect: connection refused" Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.994856 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:35 crc kubenswrapper[4835]: I0129 15:58:35.995342 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:36 crc kubenswrapper[4835]: I0129 15:58:36.754551 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zhf9x" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:36 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:36 crc kubenswrapper[4835]: > Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.039853 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="registry-server" probeResult="failure" output=< Jan 29 15:58:37 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:37 crc kubenswrapper[4835]: > Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.469697 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.470096 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.527028 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.569702 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.580803 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607554 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607635 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607727 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607796 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5746f\" (UniqueName: \"kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607826 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.607959 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.608010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle\") pod \"b576efc4-2651-4f93-baed-271c66e46be8\" (UID: \"b576efc4-2651-4f93-baed-271c66e46be8\") " Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.608090 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs" (OuterVolumeSpecName: "logs") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.608559 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b576efc4-2651-4f93-baed-271c66e46be8-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.608672 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.622813 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts" (OuterVolumeSpecName: "scripts") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.622853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.631726 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f" (OuterVolumeSpecName: "kube-api-access-5746f") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). 
InnerVolumeSpecName "kube-api-access-5746f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.663809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data" (OuterVolumeSpecName: "config-data") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.676687 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b576efc4-2651-4f93-baed-271c66e46be8" (UID: "b576efc4-2651-4f93-baed-271c66e46be8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.709942 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5746f\" (UniqueName: \"kubernetes.io/projected/b576efc4-2651-4f93-baed-271c66e46be8-kube-api-access-5746f\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.709979 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.709990 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.709999 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.710023 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b576efc4-2651-4f93-baed-271c66e46be8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.710032 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b576efc4-2651-4f93-baed-271c66e46be8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.716844 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b576efc4-2651-4f93-baed-271c66e46be8","Type":"ContainerDied","Data":"4c07c9934d19dac4286472d6a79ffb9a393993ef56ff750156b08cedf53880c5"} Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.716903 4835 scope.go:117] "RemoveContainer" containerID="d67b46f0ac9235ecc15a432d766b5f7b37bce77e99242b8805146a71f5c3547b" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.716914 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.743738 4835 scope.go:117] "RemoveContainer" containerID="0aeab220ffeb52a93ec6a03d30fdc4d08cbcf2b4ec708911a3cf5e97169e246a" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.753154 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.783825 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.789963 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:37 crc kubenswrapper[4835]: E0129 15:58:37.790516 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790538 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api" Jan 29 15:58:37 crc kubenswrapper[4835]: E0129 15:58:37.790566 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api-log" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790576 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api-log" Jan 29 15:58:37 crc kubenswrapper[4835]: E0129 15:58:37.790589 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="dnsmasq-dns" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790597 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="dnsmasq-dns" Jan 29 15:58:37 crc kubenswrapper[4835]: E0129 15:58:37.790641 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="init" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790650 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="init" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790883 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790911 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e0e4fe-b506-404a-940a-5d10b99221b0" containerName="dnsmasq-dns" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.790938 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b576efc4-2651-4f93-baed-271c66e46be8" containerName="cinder-api-log" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.792759 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.796190 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.796267 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.796392 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.810104 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912610 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912662 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912696 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912747 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgbgx\" (UniqueName: \"kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912802 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.912856 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.913038 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.913187 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:37 crc kubenswrapper[4835]: I0129 15:58:37.913233 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015368 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015455 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015592 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015641 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015664 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015691 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015738 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgbgx\" (UniqueName: \"kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015768 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc 
kubenswrapper[4835]: I0129 15:58:38.015659 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.015821 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.016269 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.020459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.021280 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.021845 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.021977 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.022397 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.023685 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.032966 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgbgx\" (UniqueName: \"kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx\") pod \"cinder-api-0\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.117284 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.503099 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b576efc4-2651-4f93-baed-271c66e46be8" path="/var/lib/kubelet/pods/b576efc4-2651-4f93-baed-271c66e46be8/volumes" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.543821 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.635736 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.635755 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.730591 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerStarted","Data":"e546f774a69c7bef220344ba61a7fb5ab11517a2f2fa79dd5619c93c427b5475"} Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.736265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerStarted","Data":"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378"} Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.736963 4835 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="barbican-api" containerStatusID={"Type":"cri-o","ID":"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166"} pod="openstack/barbican-api-566b7cbc86-chzgb" containerMessage="Container barbican-api failed liveness probe, will be restarted" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.737014 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" containerID="cri-o://8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166" gracePeriod=30 Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.744280 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.744318 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.744383 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:38 crc kubenswrapper[4835]: I0129 15:58:38.745731 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": EOF" Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.586700 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" 
containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.586797 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.587611 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api-log" containerStatusID={"Type":"cri-o","ID":"d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379"} pod="openstack/barbican-api-564bdc756-f7cnh" containerMessage="Container barbican-api-log failed liveness probe, will be restarted" Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.587658 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" containerID="cri-o://d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379" gracePeriod=30 Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.746677 4835 generic.go:334] "Generic (PLEG): container finished" podID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerID="d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379" exitCode=143 Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.746771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerDied","Data":"d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379"} Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.748491 4835 generic.go:334] "Generic (PLEG): container finished" podID="d684c443-8def-40a8-b694-41ccaf6359b2" containerID="61604143c8675d3e148a9653070e2523fa21b55358537c76a5a550c480dd6178" exitCode=0 Jan 29 15:58:39 crc 
kubenswrapper[4835]: I0129 15:58:39.748566 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lqgfx" event={"ID":"d684c443-8def-40a8-b694-41ccaf6359b2","Type":"ContainerDied","Data":"61604143c8675d3e148a9653070e2523fa21b55358537c76a5a550c480dd6178"} Jan 29 15:58:39 crc kubenswrapper[4835]: I0129 15:58:39.752188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerStarted","Data":"7d39271af2af9a293c7e1b987bdaaac49715c41571364d6fe8ad55a46391979e"} Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.189910 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.282346 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.282743 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8495b76777-n56gd" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="dnsmasq-dns" containerID="cri-o://41b19aa8d9c8ac559b5bc4737b481e23888428560ba89159943ec3ed83441a34" gracePeriod=10 Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.484404 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.494343 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:58:40 crc kubenswrapper[4835]: E0129 15:58:40.494849 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.548092 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.551630 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.665197 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"] Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.729077 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.735861 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8495b76777-n56gd" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: connect: connection refused" Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.811587 4835 generic.go:334] "Generic (PLEG): container finished" podID="c0f73761-fb6a-432f-b1af-740e344dd787" containerID="41b19aa8d9c8ac559b5bc4737b481e23888428560ba89159943ec3ed83441a34" exitCode=0 Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.811660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495b76777-n56gd" event={"ID":"c0f73761-fb6a-432f-b1af-740e344dd787","Type":"ContainerDied","Data":"41b19aa8d9c8ac559b5bc4737b481e23888428560ba89159943ec3ed83441a34"} Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.814928 4835 generic.go:334] "Generic (PLEG): container finished" podID="40d3c372-32e0-4fa4-8b75-a029eb3c27d6" 
containerID="9bf70d93273d409efcf76ced575d4a904a19050bbe879a419756759665374eb8" exitCode=0 Jan 29 15:58:40 crc kubenswrapper[4835]: I0129 15:58:40.815208 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" event={"ID":"40d3c372-32e0-4fa4-8b75-a029eb3c27d6","Type":"ContainerDied","Data":"9bf70d93273d409efcf76ced575d4a904a19050bbe879a419756759665374eb8"} Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.085448 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.211683 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.386741 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.532073 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts\") pod \"d684c443-8def-40a8-b694-41ccaf6359b2\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.532430 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bll2v\" (UniqueName: \"kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v\") pod \"d684c443-8def-40a8-b694-41ccaf6359b2\" (UID: \"d684c443-8def-40a8-b694-41ccaf6359b2\") " Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.533150 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"d684c443-8def-40a8-b694-41ccaf6359b2" (UID: "d684c443-8def-40a8-b694-41ccaf6359b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.549230 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v" (OuterVolumeSpecName: "kube-api-access-bll2v") pod "d684c443-8def-40a8-b694-41ccaf6359b2" (UID: "d684c443-8def-40a8-b694-41ccaf6359b2"). InnerVolumeSpecName "kube-api-access-bll2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.634601 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d684c443-8def-40a8-b694-41ccaf6359b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.634638 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bll2v\" (UniqueName: \"kubernetes.io/projected/d684c443-8def-40a8-b694-41ccaf6359b2-kube-api-access-bll2v\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.826217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerStarted","Data":"d537da9c327d41d1628f1e29eec86d0b3bf610d51630148342dc5dae69ebf200"} Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.828077 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lqgfx" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.828107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lqgfx" event={"ID":"d684c443-8def-40a8-b694-41ccaf6359b2","Type":"ContainerDied","Data":"5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990"} Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.828142 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ed1ac7c4924a58f936449deca09bd99e1b33504ce551abca5fedbb26a025990" Jan 29 15:58:41 crc kubenswrapper[4835]: I0129 15:58:41.903694 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.370523 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.462282 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts\") pod \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.462614 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c695t\" (UniqueName: \"kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t\") pod \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\" (UID: \"40d3c372-32e0-4fa4-8b75-a029eb3c27d6\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.464623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"40d3c372-32e0-4fa4-8b75-a029eb3c27d6" (UID: "40d3c372-32e0-4fa4-8b75-a029eb3c27d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.470833 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t" (OuterVolumeSpecName: "kube-api-access-c695t") pod "40d3c372-32e0-4fa4-8b75-a029eb3c27d6" (UID: "40d3c372-32e0-4fa4-8b75-a029eb3c27d6"). InnerVolumeSpecName "kube-api-access-c695t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.494517 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.565041 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.565078 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c695t\" (UniqueName: \"kubernetes.io/projected/40d3c372-32e0-4fa4-8b75-a029eb3c27d6-kube-api-access-c695t\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.665852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdkdc\" (UniqueName: \"kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.666205 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.666283 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.666406 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.666500 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.666519 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0\") pod \"c0f73761-fb6a-432f-b1af-740e344dd787\" (UID: \"c0f73761-fb6a-432f-b1af-740e344dd787\") " Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.671335 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc" (OuterVolumeSpecName: "kube-api-access-hdkdc") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "kube-api-access-hdkdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.734094 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.741526 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.753991 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.755665 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.756601 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config" (OuterVolumeSpecName: "config") pod "c0f73761-fb6a-432f-b1af-740e344dd787" (UID: "c0f73761-fb6a-432f-b1af-740e344dd787"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770234 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770273 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770286 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdkdc\" (UniqueName: \"kubernetes.io/projected/c0f73761-fb6a-432f-b1af-740e344dd787-kube-api-access-hdkdc\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770343 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770358 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.770369 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c0f73761-fb6a-432f-b1af-740e344dd787-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.855983 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8495b76777-n56gd" event={"ID":"c0f73761-fb6a-432f-b1af-740e344dd787","Type":"ContainerDied","Data":"bc145ddee211c85e6955c8ad0e37ca1800933effa646dce26cc1a814c9e36b99"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.856030 4835 scope.go:117] "RemoveContainer" containerID="41b19aa8d9c8ac559b5bc4737b481e23888428560ba89159943ec3ed83441a34" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.856175 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8495b76777-n56gd" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.872765 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerStarted","Data":"5d95c6db7e0e718cab2c5db4b47a52f162d8fe1de2e34a697fb6ad3fbe3a9f22"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.873561 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.884540 4835 generic.go:334] "Generic (PLEG): container finished" podID="55d2384d-74ba-4054-8760-33b5d1fa47ed" containerID="49a71a352ed53533222867f731b5079ce92b278c652a47b895136281569e2554" exitCode=0 Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.884644 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpbd5" event={"ID":"55d2384d-74ba-4054-8760-33b5d1fa47ed","Type":"ContainerDied","Data":"49a71a352ed53533222867f731b5079ce92b278c652a47b895136281569e2554"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.888407 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerStarted","Data":"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.893833 4835 generic.go:334] "Generic (PLEG): container finished" podID="d00de9b8-ac4d-473d-b1fe-0cfde39ed538" containerID="fe35fc9fe00311f2a2a93d003c1ee89fd0b28151e8d0b023e21aa2fb9fec2a37" exitCode=0 Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.893928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" event={"ID":"d00de9b8-ac4d-473d-b1fe-0cfde39ed538","Type":"ContainerDied","Data":"fe35fc9fe00311f2a2a93d003c1ee89fd0b28151e8d0b023e21aa2fb9fec2a37"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.904565 4835 generic.go:334] "Generic (PLEG): container finished" podID="07fc1ac9-38d8-4a80-837c-1d4daa8edecd" containerID="a73ec550d69bf4647d917ba77a530d4586b65fc08a4e29e5b8caebed151f48d7" exitCode=0 Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.905315 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9b4g2" event={"ID":"07fc1ac9-38d8-4a80-837c-1d4daa8edecd","Type":"ContainerDied","Data":"a73ec550d69bf4647d917ba77a530d4586b65fc08a4e29e5b8caebed151f48d7"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.911730 4835 scope.go:117] "RemoveContainer" containerID="302cdb6ab279c1363cb013c59a10dedb3aaf537b566f16d25afe0add8c1ab43d" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.912926 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" event={"ID":"40d3c372-32e0-4fa4-8b75-a029eb3c27d6","Type":"ContainerDied","Data":"29ec6141a2442550534f1d4aae07225358e931aa1e8f6b07dd3ef1d6ad7e31d3"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.912967 4835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="29ec6141a2442550534f1d4aae07225358e931aa1e8f6b07dd3ef1d6ad7e31d3" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.913022 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3dd7-account-create-update-k8qqd" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.915441 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.9154220859999995 podStartE2EDuration="5.915422086s" podCreationTimestamp="2026-01-29 15:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:42.902973555 +0000 UTC m=+1825.096017209" watchObservedRunningTime="2026-01-29 15:58:42.915422086 +0000 UTC m=+1825.108465740" Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.925094 4835 generic.go:334] "Generic (PLEG): container finished" podID="4a5379f6-0b77-4645-9cc1-aa6027b0ade8" containerID="c2c0850d47993a63de0d0ffdab0a6d85f2eb69d829db94ee24b2af2e9703ece8" exitCode=0 Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.925283 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-370b-account-create-update-gspt4" event={"ID":"4a5379f6-0b77-4645-9cc1-aa6027b0ade8","Type":"ContainerDied","Data":"c2c0850d47993a63de0d0ffdab0a6d85f2eb69d829db94ee24b2af2e9703ece8"} Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.925784 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pc9xw" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" containerID="cri-o://6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031" gracePeriod=2 Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.925844 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:42 crc 
kubenswrapper[4835]: I0129 15:58:42.960141 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:58:42 crc kubenswrapper[4835]: I0129 15:58:42.979843 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8495b76777-n56gd"] Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.445283 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.596421 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content\") pod \"796e145f-c075-485b-91a5-c088b964fd44\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.596602 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities\") pod \"796e145f-c075-485b-91a5-c088b964fd44\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.596722 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnp2c\" (UniqueName: \"kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c\") pod \"796e145f-c075-485b-91a5-c088b964fd44\" (UID: \"796e145f-c075-485b-91a5-c088b964fd44\") " Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.610798 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities" (OuterVolumeSpecName: "utilities") pod "796e145f-c075-485b-91a5-c088b964fd44" (UID: "796e145f-c075-485b-91a5-c088b964fd44"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.627748 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.628039 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-central-agent" containerID="cri-o://0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697" gracePeriod=30 Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.628817 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="proxy-httpd" containerID="cri-o://a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4" gracePeriod=30 Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.628876 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="sg-core" containerID="cri-o://27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087" gracePeriod=30 Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.628912 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-notification-agent" containerID="cri-o://79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37" gracePeriod=30 Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.633865 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c" (OuterVolumeSpecName: "kube-api-access-lnp2c") pod "796e145f-c075-485b-91a5-c088b964fd44" (UID: "796e145f-c075-485b-91a5-c088b964fd44"). 
InnerVolumeSpecName "kube-api-access-lnp2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.670743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "796e145f-c075-485b-91a5-c088b964fd44" (UID: "796e145f-c075-485b-91a5-c088b964fd44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.700591 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnp2c\" (UniqueName: \"kubernetes.io/projected/796e145f-c075-485b-91a5-c088b964fd44-kube-api-access-lnp2c\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.700626 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.700643 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796e145f-c075-485b-91a5-c088b964fd44-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.790582 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.791038 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="proxy-httpd" probeResult="failure" 
output="HTTP probe failed with statuscode: 503" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.983062 4835 generic.go:334] "Generic (PLEG): container finished" podID="796e145f-c075-485b-91a5-c088b964fd44" containerID="6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031" exitCode=0 Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.983432 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pc9xw" Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.983265 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerDied","Data":"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031"} Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.983597 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pc9xw" event={"ID":"796e145f-c075-485b-91a5-c088b964fd44","Type":"ContainerDied","Data":"59f628fce7d0de30d45d4beba0d09716c2a313c7ca4687d655226faf6922cad1"} Jan 29 15:58:43 crc kubenswrapper[4835]: I0129 15:58:43.983621 4835 scope.go:117] "RemoveContainer" containerID="6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.027837 4835 generic.go:334] "Generic (PLEG): container finished" podID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerID="27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087" exitCode=2 Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.028063 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerDied","Data":"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087"} Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.043417 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerStarted","Data":"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6"} Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.043478 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.044544 4835 scope.go:117] "RemoveContainer" containerID="7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.088528 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pc9xw"] Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.088710 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.080220588 podStartE2EDuration="25.08868373s" podCreationTimestamp="2026-01-29 15:58:19 +0000 UTC" firstStartedPulling="2026-01-29 15:58:22.170252377 +0000 UTC m=+1804.363296051" lastFinishedPulling="2026-01-29 15:58:40.178715539 +0000 UTC m=+1822.371759193" observedRunningTime="2026-01-29 15:58:44.071092049 +0000 UTC m=+1826.264135703" watchObservedRunningTime="2026-01-29 15:58:44.08868373 +0000 UTC m=+1826.281727384" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.134953 4835 scope.go:117] "RemoveContainer" containerID="a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.243642 4835 scope.go:117] "RemoveContainer" containerID="6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031" Jan 29 15:58:44 crc kubenswrapper[4835]: E0129 15:58:44.251649 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031\": container with ID starting with 
6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031 not found: ID does not exist" containerID="6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.251697 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031"} err="failed to get container status \"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031\": rpc error: code = NotFound desc = could not find container \"6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031\": container with ID starting with 6ceb94121343c46e6b45f66c9e6459caab567e6d325ea0325acb236f23b2f031 not found: ID does not exist" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.251726 4835 scope.go:117] "RemoveContainer" containerID="7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f" Jan 29 15:58:44 crc kubenswrapper[4835]: E0129 15:58:44.255957 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f\": container with ID starting with 7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f not found: ID does not exist" containerID="7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.256006 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f"} err="failed to get container status \"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f\": rpc error: code = NotFound desc = could not find container \"7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f\": container with ID starting with 7bc5838c207e1dbc9c1997ef6fe5ba6af71ffa81bf5a3145f061dafa09e35e5f not found: ID does not 
exist" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.256038 4835 scope.go:117] "RemoveContainer" containerID="a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f" Jan 29 15:58:44 crc kubenswrapper[4835]: E0129 15:58:44.258808 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f\": container with ID starting with a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f not found: ID does not exist" containerID="a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.258846 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f"} err="failed to get container status \"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f\": rpc error: code = NotFound desc = could not find container \"a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f\": container with ID starting with a8b751e26767000bdf2e46e639eddbeccb9f33c06f7519ed7edf85e07f99319f not found: ID does not exist" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.507496 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796e145f-c075-485b-91a5-c088b964fd44" path="/var/lib/kubelet/pods/796e145f-c075-485b-91a5-c088b964fd44/volumes" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.509726 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" path="/var/lib/kubelet/pods/c0f73761-fb6a-432f-b1af-740e344dd787/volumes" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.684236 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.746156 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.784827 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.825180 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.836168 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7vwx\" (UniqueName: \"kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx\") pod \"55d2384d-74ba-4054-8760-33b5d1fa47ed\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.836239 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts\") pod \"55d2384d-74ba-4054-8760-33b5d1fa47ed\" (UID: \"55d2384d-74ba-4054-8760-33b5d1fa47ed\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.836328 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8c2n\" (UniqueName: \"kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n\") pod \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.836416 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts\") pod \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\" (UID: \"4a5379f6-0b77-4645-9cc1-aa6027b0ade8\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.836756 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55d2384d-74ba-4054-8760-33b5d1fa47ed" (UID: "55d2384d-74ba-4054-8760-33b5d1fa47ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.837144 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a5379f6-0b77-4645-9cc1-aa6027b0ade8" (UID: "4a5379f6-0b77-4645-9cc1-aa6027b0ade8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.837221 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55d2384d-74ba-4054-8760-33b5d1fa47ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.841743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n" (OuterVolumeSpecName: "kube-api-access-b8c2n") pod "4a5379f6-0b77-4645-9cc1-aa6027b0ade8" (UID: "4a5379f6-0b77-4645-9cc1-aa6027b0ade8"). InnerVolumeSpecName "kube-api-access-b8c2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.847706 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx" (OuterVolumeSpecName: "kube-api-access-m7vwx") pod "55d2384d-74ba-4054-8760-33b5d1fa47ed" (UID: "55d2384d-74ba-4054-8760-33b5d1fa47ed"). InnerVolumeSpecName "kube-api-access-m7vwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.938746 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strzn\" (UniqueName: \"kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn\") pod \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.938919 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvllv\" (UniqueName: \"kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv\") pod \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.938975 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts\") pod \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\" (UID: \"d00de9b8-ac4d-473d-b1fe-0cfde39ed538\") " Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.939001 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts\") pod \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\" (UID: \"07fc1ac9-38d8-4a80-837c-1d4daa8edecd\") " Jan 29 15:58:44 crc 
kubenswrapper[4835]: I0129 15:58:44.939499 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.939520 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7vwx\" (UniqueName: \"kubernetes.io/projected/55d2384d-74ba-4054-8760-33b5d1fa47ed-kube-api-access-m7vwx\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.939533 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8c2n\" (UniqueName: \"kubernetes.io/projected/4a5379f6-0b77-4645-9cc1-aa6027b0ade8-kube-api-access-b8c2n\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.944390 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d00de9b8-ac4d-473d-b1fe-0cfde39ed538" (UID: "d00de9b8-ac4d-473d-b1fe-0cfde39ed538"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.947961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv" (OuterVolumeSpecName: "kube-api-access-vvllv") pod "07fc1ac9-38d8-4a80-837c-1d4daa8edecd" (UID: "07fc1ac9-38d8-4a80-837c-1d4daa8edecd"). InnerVolumeSpecName "kube-api-access-vvllv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.949687 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn" (OuterVolumeSpecName: "kube-api-access-strzn") pod "d00de9b8-ac4d-473d-b1fe-0cfde39ed538" (UID: "d00de9b8-ac4d-473d-b1fe-0cfde39ed538"). InnerVolumeSpecName "kube-api-access-strzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:44 crc kubenswrapper[4835]: I0129 15:58:44.952154 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07fc1ac9-38d8-4a80-837c-1d4daa8edecd" (UID: "07fc1ac9-38d8-4a80-837c-1d4daa8edecd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.050271 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvllv\" (UniqueName: \"kubernetes.io/projected/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-kube-api-access-vvllv\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.050295 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.050305 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc1ac9-38d8-4a80-837c-1d4daa8edecd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.050315 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-strzn\" (UniqueName: 
\"kubernetes.io/projected/d00de9b8-ac4d-473d-b1fe-0cfde39ed538-kube-api-access-strzn\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.077622 4835 generic.go:334] "Generic (PLEG): container finished" podID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerID="a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4" exitCode=0 Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.077651 4835 generic.go:334] "Generic (PLEG): container finished" podID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerID="0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697" exitCode=0 Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.077698 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerDied","Data":"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.077733 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerDied","Data":"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.079360 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9b4g2" event={"ID":"07fc1ac9-38d8-4a80-837c-1d4daa8edecd","Type":"ContainerDied","Data":"0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.079382 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7faca867045ce27e03f629e2b507635b5fe4104203436b1f16415f90d68f28" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.079448 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-9b4g2" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.081583 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-370b-account-create-update-gspt4" event={"ID":"4a5379f6-0b77-4645-9cc1-aa6027b0ade8","Type":"ContainerDied","Data":"e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.081633 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a8b0b68c96f30fe373af94e338e4fe381d28f85907191431fcc33578177adc" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.081702 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-370b-account-create-update-gspt4" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.087766 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpbd5" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.087789 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpbd5" event={"ID":"55d2384d-74ba-4054-8760-33b5d1fa47ed","Type":"ContainerDied","Data":"5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.087829 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f6320421a0a9bbc64f28582319fb659abe8fff16dbe333eb66c4a12a3d881fe" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.091761 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.091755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-17d3-account-create-update-7zvtj" event={"ID":"d00de9b8-ac4d-473d-b1fe-0cfde39ed538","Type":"ContainerDied","Data":"de02e023dc114fbad8a74f4ce095c78cfdc1a1265c35ec7871cdff82585a9818"} Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.092115 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de02e023dc114fbad8a74f4ce095c78cfdc1a1265c35ec7871cdff82585a9818" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.192666 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:32962->10.217.0.159:9311: read: connection reset by peer" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.192654 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:32976->10.217.0.159:9311: read: connection reset by peer" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.192682 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": read tcp 10.217.0.2:32960->10.217.0.159:9311: read: connection reset by peer" Jan 29 15:58:45 crc kubenswrapper[4835]: I0129 15:58:45.272646 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 
29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.047174 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.158242 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.163079 4835 generic.go:334] "Generic (PLEG): container finished" podID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerID="8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166" exitCode=0 Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.163119 4835 generic.go:334] "Generic (PLEG): container finished" podID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerID="a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43" exitCode=1 Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.163314 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerDied","Data":"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166"} Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.163345 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerDied","Data":"a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43"} Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.163376 4835 scope.go:117] "RemoveContainer" containerID="8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.173499 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" 
containerID="cri-o://5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378" gracePeriod=30 Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.173779 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-566b7cbc86-chzgb" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.159:9311/healthcheck\": dial tcp 10.217.0.159:9311: connect: connection refused" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.308674 4835 scope.go:117] "RemoveContainer" containerID="8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166" Jan 29 15:58:46 crc kubenswrapper[4835]: E0129 15:58:46.315655 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166\": container with ID starting with 8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166 not found: ID does not exist" containerID="8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.315714 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166"} err="failed to get container status \"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166\": rpc error: code = NotFound desc = could not find container \"8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166\": container with ID starting with 8543a05b49c593a06f4340a9e5903ee0ddd07a04562ff4e802c6518b183b8166 not found: ID does not exist" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.749575 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zhf9x" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" probeResult="failure" output=< 
Jan 29 15:58:46 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 15:58:46 crc kubenswrapper[4835]: > Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.791149 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.887895 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs\") pod \"a882d76f-a62a-480e-84f6-5fc1b766ab86\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.888066 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom\") pod \"a882d76f-a62a-480e-84f6-5fc1b766ab86\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.888240 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spnnb\" (UniqueName: \"kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb\") pod \"a882d76f-a62a-480e-84f6-5fc1b766ab86\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.888297 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle\") pod \"a882d76f-a62a-480e-84f6-5fc1b766ab86\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.888354 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data\") 
pod \"a882d76f-a62a-480e-84f6-5fc1b766ab86\" (UID: \"a882d76f-a62a-480e-84f6-5fc1b766ab86\") " Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.888679 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs" (OuterVolumeSpecName: "logs") pod "a882d76f-a62a-480e-84f6-5fc1b766ab86" (UID: "a882d76f-a62a-480e-84f6-5fc1b766ab86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.890278 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a882d76f-a62a-480e-84f6-5fc1b766ab86-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.894785 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb" (OuterVolumeSpecName: "kube-api-access-spnnb") pod "a882d76f-a62a-480e-84f6-5fc1b766ab86" (UID: "a882d76f-a62a-480e-84f6-5fc1b766ab86"). InnerVolumeSpecName "kube-api-access-spnnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.895615 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a882d76f-a62a-480e-84f6-5fc1b766ab86" (UID: "a882d76f-a62a-480e-84f6-5fc1b766ab86"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.918309 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a882d76f-a62a-480e-84f6-5fc1b766ab86" (UID: "a882d76f-a62a-480e-84f6-5fc1b766ab86"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.951278 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data" (OuterVolumeSpecName: "config-data") pod "a882d76f-a62a-480e-84f6-5fc1b766ab86" (UID: "a882d76f-a62a-480e-84f6-5fc1b766ab86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.992513 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.992556 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.992566 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a882d76f-a62a-480e-84f6-5fc1b766ab86-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:46 crc kubenswrapper[4835]: I0129 15:58:46.992575 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spnnb\" (UniqueName: \"kubernetes.io/projected/a882d76f-a62a-480e-84f6-5fc1b766ab86-kube-api-access-spnnb\") on node \"crc\" 
DevicePath \"\"" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.173808 4835 generic.go:334] "Generic (PLEG): container finished" podID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerID="5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378" exitCode=143 Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.174025 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerDied","Data":"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378"} Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.174199 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-566b7cbc86-chzgb" event={"ID":"a882d76f-a62a-480e-84f6-5fc1b766ab86","Type":"ContainerDied","Data":"14ca0f1782abf35cecba1d01d62c137d92fd8eb78703ad7494275dbc7a115918"} Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.174224 4835 scope.go:117] "RemoveContainer" containerID="a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.174101 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-566b7cbc86-chzgb" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.215616 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"] Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.229179 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-566b7cbc86-chzgb"] Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.229365 4835 scope.go:117] "RemoveContainer" containerID="5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.259199 4835 scope.go:117] "RemoveContainer" containerID="a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.279601 4835 scope.go:117] "RemoveContainer" containerID="a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43" Jan 29 15:58:47 crc kubenswrapper[4835]: E0129 15:58:47.280204 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43\": container with ID starting with a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43 not found: ID does not exist" containerID="a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.280255 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43"} err="failed to get container status \"a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43\": rpc error: code = NotFound desc = could not find container \"a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43\": container with ID starting with a0120d8f81868434500fcb91f191f97ed88a1f9724f0fd1fb16908382b9cbe43 not found: ID does not exist" Jan 
29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.280280 4835 scope.go:117] "RemoveContainer" containerID="5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378" Jan 29 15:58:47 crc kubenswrapper[4835]: E0129 15:58:47.280810 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378\": container with ID starting with 5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378 not found: ID does not exist" containerID="5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.280856 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378"} err="failed to get container status \"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378\": rpc error: code = NotFound desc = could not find container \"5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378\": container with ID starting with 5412896f1b40b6fb88b3338402d0360d0b4436b94f93f49434f0778743ae9378 not found: ID does not exist" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.280884 4835 scope.go:117] "RemoveContainer" containerID="a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8" Jan 29 15:58:47 crc kubenswrapper[4835]: E0129 15:58:47.281241 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8\": container with ID starting with a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8 not found: ID does not exist" containerID="a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.281288 4835 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8"} err="failed to get container status \"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8\": rpc error: code = NotFound desc = could not find container \"a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8\": container with ID starting with a5c064ecda2ded257e6104f4995886cd11a98535c3eba0af07bf4d228f19efa8 not found: ID does not exist" Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.302357 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.302596 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hjvxh" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="registry-server" containerID="cri-o://97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e" gracePeriod=2 Jan 29 15:58:47 crc kubenswrapper[4835]: I0129 15:58:47.861155 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.010772 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mglp\" (UniqueName: \"kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp\") pod \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.010826 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content\") pod \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.010884 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities\") pod \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\" (UID: \"7c0649a1-77a6-4e5c-a6f1-d96480a91155\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.012395 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities" (OuterVolumeSpecName: "utilities") pod "7c0649a1-77a6-4e5c-a6f1-d96480a91155" (UID: "7c0649a1-77a6-4e5c-a6f1-d96480a91155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.022828 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp" (OuterVolumeSpecName: "kube-api-access-4mglp") pod "7c0649a1-77a6-4e5c-a6f1-d96480a91155" (UID: "7c0649a1-77a6-4e5c-a6f1-d96480a91155"). InnerVolumeSpecName "kube-api-access-4mglp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.077727 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.112872 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mglp\" (UniqueName: \"kubernetes.io/projected/7c0649a1-77a6-4e5c-a6f1-d96480a91155-kube-api-access-4mglp\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.112911 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.142213 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c0649a1-77a6-4e5c-a6f1-d96480a91155" (UID: "7c0649a1-77a6-4e5c-a6f1-d96480a91155"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.188016 4835 generic.go:334] "Generic (PLEG): container finished" podID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" containerID="6b7329a1e06a9569e545697f0b7b3b8715cccd53175de5de3306fab6c063fed9" exitCode=0 Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.188113 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnnnq" event={"ID":"0830b0ce-038c-41ff-bed9-e4efa49ae35f","Type":"ContainerDied","Data":"6b7329a1e06a9569e545697f0b7b3b8715cccd53175de5de3306fab6c063fed9"} Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.195150 4835 generic.go:334] "Generic (PLEG): container finished" podID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerID="97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e" exitCode=0 Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.195238 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerDied","Data":"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e"} Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.195250 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvxh" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.195273 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvxh" event={"ID":"7c0649a1-77a6-4e5c-a6f1-d96480a91155","Type":"ContainerDied","Data":"d25f730a17a3a5ac6f2cc9f845c97394ee79ea430e1f4daa440db54fa09251f5"} Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.195301 4835 scope.go:117] "RemoveContainer" containerID="97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.203363 4835 generic.go:334] "Generic (PLEG): container finished" podID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerID="79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37" exitCode=0 Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.203517 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.203538 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerDied","Data":"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37"} Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.203820 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be50479d-0914-4f6c-a9b7-57dd363a38df","Type":"ContainerDied","Data":"9460cef778bf50373b792c6aa60d2d20e33a5b49c8fb7d7f292cfac73371c2c1"} Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.213979 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 
15:58:48.214033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214076 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214165 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214208 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xc9b\" (UniqueName: \"kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" (UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd\") pod \"be50479d-0914-4f6c-a9b7-57dd363a38df\" 
(UID: \"be50479d-0914-4f6c-a9b7-57dd363a38df\") " Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.214589 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c0649a1-77a6-4e5c-a6f1-d96480a91155-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.215002 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.215254 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.220402 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts" (OuterVolumeSpecName: "scripts") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.226747 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b" (OuterVolumeSpecName: "kube-api-access-5xc9b") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "kube-api-access-5xc9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.228837 4835 scope.go:117] "RemoveContainer" containerID="695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.248785 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.318876 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xc9b\" (UniqueName: \"kubernetes.io/projected/be50479d-0914-4f6c-a9b7-57dd363a38df-kube-api-access-5xc9b\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.318911 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.318982 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.319015 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.319025 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be50479d-0914-4f6c-a9b7-57dd363a38df-run-httpd\") on node \"crc\" DevicePath \"\"" 
Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.320436 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data" (OuterVolumeSpecName: "config-data") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.331386 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be50479d-0914-4f6c-a9b7-57dd363a38df" (UID: "be50479d-0914-4f6c-a9b7-57dd363a38df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.402966 4835 scope.go:117] "RemoveContainer" containerID="96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.420037 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.421356 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.421376 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be50479d-0914-4f6c-a9b7-57dd363a38df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.430925 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hjvxh"] Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.434305 4835 
scope.go:117] "RemoveContainer" containerID="97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.435914 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e\": container with ID starting with 97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e not found: ID does not exist" containerID="97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.435961 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e"} err="failed to get container status \"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e\": rpc error: code = NotFound desc = could not find container \"97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e\": container with ID starting with 97aa30062d873ca3caccee7f2985fb36659178e4a59df27881d21905252ad78e not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.436040 4835 scope.go:117] "RemoveContainer" containerID="695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.439410 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0\": container with ID starting with 695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0 not found: ID does not exist" containerID="695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.439439 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0"} err="failed to get container status \"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0\": rpc error: code = NotFound desc = could not find container \"695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0\": container with ID starting with 695f6d9721c508d8ca4ac8677815be2b006232e12cbcf8938884b1213f8fa6d0 not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.439457 4835 scope.go:117] "RemoveContainer" containerID="96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.439913 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b\": container with ID starting with 96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b not found: ID does not exist" containerID="96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.439969 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b"} err="failed to get container status \"96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b\": rpc error: code = NotFound desc = could not find container \"96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b\": container with ID starting with 96b554fd6e5e0813b3e0f3569b6bdd3af37cf28306f0ad15e969dfe18225b02b not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.440004 4835 scope.go:117] "RemoveContainer" containerID="a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.462436 4835 scope.go:117] "RemoveContainer" 
containerID="27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.505696 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" path="/var/lib/kubelet/pods/7c0649a1-77a6-4e5c-a6f1-d96480a91155/volumes" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.507055 4835 scope.go:117] "RemoveContainer" containerID="79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.507297 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" path="/var/lib/kubelet/pods/a882d76f-a62a-480e-84f6-5fc1b766ab86/volumes" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.535087 4835 scope.go:117] "RemoveContainer" containerID="0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.565362 4835 scope.go:117] "RemoveContainer" containerID="a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.565598 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.570951 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4\": container with ID starting with a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4 not found: ID does not exist" containerID="a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.570999 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4"} err="failed to get container status 
\"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4\": rpc error: code = NotFound desc = could not find container \"a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4\": container with ID starting with a1bf7d2773d43cdf55d7634c88a5ec0bed605f6677e75d72e477f4cc166891a4 not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.571029 4835 scope.go:117] "RemoveContainer" containerID="27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.578002 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.578159 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087\": container with ID starting with 27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087 not found: ID does not exist" containerID="27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.578207 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087"} err="failed to get container status \"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087\": rpc error: code = NotFound desc = could not find container \"27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087\": container with ID starting with 27ccbcd9e8cbf63195526768edabe64c99613d9dfcb4f007afcc8d28d6327087 not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.578238 4835 scope.go:117] "RemoveContainer" containerID="79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.578581 4835 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37\": container with ID starting with 79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37 not found: ID does not exist" containerID="79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.578613 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37"} err="failed to get container status \"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37\": rpc error: code = NotFound desc = could not find container \"79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37\": container with ID starting with 79c26edf6b42efbe792329cdc861f4abc54999d62c3522c890d30f78049e0f37 not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.578631 4835 scope.go:117] "RemoveContainer" containerID="0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.580104 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697\": container with ID starting with 0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697 not found: ID does not exist" containerID="0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.580143 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697"} err="failed to get container status \"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697\": rpc error: code = NotFound desc = could not find container 
\"0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697\": container with ID starting with 0f125a15b5beab39fc901724583d408ee893ab737edbe89db693480cc2c66697 not found: ID does not exist" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595297 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595757 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fc1ac9-38d8-4a80-837c-1d4daa8edecd" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595781 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fc1ac9-38d8-4a80-837c-1d4daa8edecd" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595815 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d2384d-74ba-4054-8760-33b5d1fa47ed" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595827 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d2384d-74ba-4054-8760-33b5d1fa47ed" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595840 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d684c443-8def-40a8-b694-41ccaf6359b2" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595848 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d684c443-8def-40a8-b694-41ccaf6359b2" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595864 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-central-agent" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595872 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-central-agent" Jan 
29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595880 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-notification-agent" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595889 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-notification-agent" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595902 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="extract-content" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595910 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="extract-content" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595922 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="dnsmasq-dns" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595930 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="dnsmasq-dns" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595953 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="extract-utilities" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595961 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="extract-utilities" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.595982 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.595992 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="796e145f-c075-485b-91a5-c088b964fd44" 
containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596007 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="proxy-httpd" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596015 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="proxy-httpd" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596026 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40d3c372-32e0-4fa4-8b75-a029eb3c27d6" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596034 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="40d3c372-32e0-4fa4-8b75-a029eb3c27d6" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596043 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596050 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596063 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596071 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596081 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="extract-content" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596087 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="extract-content" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596099 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="extract-utilities" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596106 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="extract-utilities" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596115 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596122 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596131 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596138 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596147 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00de9b8-ac4d-473d-b1fe-0cfde39ed538" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596154 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00de9b8-ac4d-473d-b1fe-0cfde39ed538" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596171 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5379f6-0b77-4645-9cc1-aa6027b0ade8" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596178 4835 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4a5379f6-0b77-4645-9cc1-aa6027b0ade8" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596191 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="init" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596198 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="init" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596213 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596221 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 15:58:48 crc kubenswrapper[4835]: E0129 15:58:48.596235 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="sg-core" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596243 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="sg-core" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596488 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5379f6-0b77-4645-9cc1-aa6027b0ade8" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596511 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-notification-agent" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596527 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fc1ac9-38d8-4a80-837c-1d4daa8edecd" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596544 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="sg-core" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596559 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596571 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0649a1-77a6-4e5c-a6f1-d96480a91155" containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596583 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="40d3c372-32e0-4fa4-8b75-a029eb3c27d6" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596592 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00de9b8-ac4d-473d-b1fe-0cfde39ed538" containerName="mariadb-account-create-update" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596603 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d684c443-8def-40a8-b694-41ccaf6359b2" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596616 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="ceilometer-central-agent" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596627 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="796e145f-c075-485b-91a5-c088b964fd44" containerName="registry-server" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596636 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f73761-fb6a-432f-b1af-740e344dd787" containerName="dnsmasq-dns" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596647 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 
15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596657 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api-log" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596668 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d2384d-74ba-4054-8760-33b5d1fa47ed" containerName="mariadb-database-create" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.596679 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" containerName="proxy-httpd" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.597143 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a882d76f-a62a-480e-84f6-5fc1b766ab86" containerName="barbican-api" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.598422 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.605435 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.618909 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.619863 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726250 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726361 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726583 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726611 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjks\" (UniqueName: \"kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726642 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.726669 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.827815 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828166 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828344 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828515 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828573 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828643 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjks\" (UniqueName: \"kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828719 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.828799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.832152 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.832180 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 
15:58:48.833580 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.834924 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.845963 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjks\" (UniqueName: \"kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks\") pod \"ceilometer-0\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " pod="openstack/ceilometer-0" Jan 29 15:58:48 crc kubenswrapper[4835]: I0129 15:58:48.934515 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:58:49 crc kubenswrapper[4835]: I0129 15:58:49.420169 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.102998 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnnnq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.243602 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerStarted","Data":"ff45a3952b2301782ffecaf6a08731db105eb4428e9f93d74358493d259c7ca1"} Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.254796 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gnnnq" event={"ID":"0830b0ce-038c-41ff-bed9-e4efa49ae35f","Type":"ContainerDied","Data":"07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346"} Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.254844 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07e8ed5ec6d1eaae335755ff1f0eb989ed70a4e0dc5b28a38734efbaa3a14346" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.254983 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gnnnq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.264685 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data\") pod \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.264886 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle\") pod \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.264955 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtsgq\" (UniqueName: \"kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq\") pod \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.265003 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data\") pod \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\" (UID: \"0830b0ce-038c-41ff-bed9-e4efa49ae35f\") " Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.284244 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq" (OuterVolumeSpecName: "kube-api-access-jtsgq") pod "0830b0ce-038c-41ff-bed9-e4efa49ae35f" (UID: "0830b0ce-038c-41ff-bed9-e4efa49ae35f"). InnerVolumeSpecName "kube-api-access-jtsgq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.292759 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0830b0ce-038c-41ff-bed9-e4efa49ae35f" (UID: "0830b0ce-038c-41ff-bed9-e4efa49ae35f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.315761 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0830b0ce-038c-41ff-bed9-e4efa49ae35f" (UID: "0830b0ce-038c-41ff-bed9-e4efa49ae35f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.340127 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7cvqt"] Jan 29 15:58:50 crc kubenswrapper[4835]: E0129 15:58:50.340730 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" containerName="glance-db-sync" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.340752 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" containerName="glance-db-sync" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.341017 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" containerName="glance-db-sync" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.341859 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.350863 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hcpz7" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.351425 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.351610 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.356885 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7cvqt"] Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.367193 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.367224 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtsgq\" (UniqueName: \"kubernetes.io/projected/0830b0ce-038c-41ff-bed9-e4efa49ae35f-kube-api-access-jtsgq\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.367237 4835 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.409702 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data" (OuterVolumeSpecName: "config-data") pod "0830b0ce-038c-41ff-bed9-e4efa49ae35f" (UID: "0830b0ce-038c-41ff-bed9-e4efa49ae35f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.469101 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.469167 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.469389 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4qk\" (UniqueName: \"kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.469524 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.469655 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0830b0ce-038c-41ff-bed9-e4efa49ae35f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 
15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.531489 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be50479d-0914-4f6c-a9b7-57dd363a38df" path="/var/lib/kubelet/pods/be50479d-0914-4f6c-a9b7-57dd363a38df/volumes" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.572779 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.572938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx4qk\" (UniqueName: \"kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.572994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.573123 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.613234 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.632088 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx4qk\" (UniqueName: \"kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.634080 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.634282 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts\") pod \"nova-cell0-conductor-db-sync-7cvqt\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.662396 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.664425 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.691130 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.748879 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778602 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778648 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtxd\" (UniqueName: \"kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778775 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: 
\"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778802 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.778850 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.798573 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881252 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881294 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881355 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ljtxd\" (UniqueName: \"kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881442 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.881529 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.884609 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.884752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.884775 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.885309 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.886317 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.909678 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtxd\" (UniqueName: \"kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd\") pod \"dnsmasq-dns-84c6fbb54f-qgfqq\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.978762 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:58:50 crc kubenswrapper[4835]: I0129 15:58:50.989979 4835 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.022948 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.269104 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="cinder-scheduler" containerID="cri-o://fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e" gracePeriod=30 Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.269580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerStarted","Data":"323900448afe12597912167cec80c6055f1cfa296cfa30f13e8c1ae7e85ab7a5"} Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.269737 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="probe" containerID="cri-o://e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6" gracePeriod=30 Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.473172 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7cvqt"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.490425 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: W0129 15:58:51.494003 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b5cbfb3_cf93_4a36_ab1d_d64d65e68858.slice/crio-e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4 WatchSource:0}: Error finding container 
e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4: Status 404 returned error can't find the container with id e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4 Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.598945 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.612725 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.632993 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.633169 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.633346 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h6f4p" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.680636 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.700492 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.711762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22j5x\" (UniqueName: \"kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x\") pod \"glance-default-external-api-0\" (UID: 
\"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.712133 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.712254 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.712459 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.712625 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.725594 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.757067 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831670 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831738 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831820 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831843 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22j5x\" (UniqueName: \"kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831895 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831917 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.831991 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.832984 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.833997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.834208 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"3d5ce9df-957d-4bff-84a4-b74104f8976a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.840324 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.840750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.841451 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.855886 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22j5x\" (UniqueName: \"kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.880931 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " 
pod="openstack/glance-default-external-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.906593 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.908448 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.911995 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:58:51 crc kubenswrapper[4835]: I0129 15:58:51.918507 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.042894 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.042954 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.043029 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.043078 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwss\" (UniqueName: \"kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.043108 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.043156 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.043181 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.070056 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144328 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144757 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjwss\" (UniqueName: \"kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144796 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144839 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144864 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc 
kubenswrapper[4835]: I0129 15:58:52.144944 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.144975 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.148149 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.148493 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.149216 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.151802 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.155823 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.159658 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.171973 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjwss\" (UniqueName: \"kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.229750 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.250491 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.289615 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" event={"ID":"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858","Type":"ContainerStarted","Data":"e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4"} Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.292125 4835 generic.go:334] "Generic (PLEG): container finished" podID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerID="e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6" exitCode=0 Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.292215 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerDied","Data":"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6"} Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.294114 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerStarted","Data":"cad574b1637439f957e68f49246b39bbce29a258431d631b0bbeebb7357aa84b"} Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.294148 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerStarted","Data":"a8d8ab1ab23f1b95dac089ca2e28dcbde834129ee761da5435b25a8fb918c2a3"} Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.302781 4835 generic.go:334] "Generic (PLEG): container finished" podID="48c1d56e-700b-4165-9cf2-473a90148c08" containerID="58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611" exitCode=0 Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.302827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" 
event={"ID":"48c1d56e-700b-4165-9cf2-473a90148c08","Type":"ContainerDied","Data":"58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611"} Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.302865 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" event={"ID":"48c1d56e-700b-4165-9cf2-473a90148c08","Type":"ContainerStarted","Data":"62ed10d94853989b4064e73a66f41e9e81ba84cd09d2a74932778e95aa911a8a"} Jan 29 15:58:52 crc kubenswrapper[4835]: E0129 15:58:52.390582 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 15:58:52 crc kubenswrapper[4835]: E0129 15:58:52.397874 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gjks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(90a18ee1-6e48-4cb1-9688-54de76668cca): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:58:52 crc kubenswrapper[4835]: E0129 15:58:52.399220 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" Jan 29 15:58:52 crc kubenswrapper[4835]: I0129 15:58:52.940988 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.042631 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:53 crc kubenswrapper[4835]: W0129 15:58:53.081209 4835 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod485db9dd_3094_4b1b_b79e_52df36d04c82.slice/crio-eb09332c54f521b49b2c5315f43f3a35cbca9a11f46f83f6c9832db53412cafe WatchSource:0}: Error finding container eb09332c54f521b49b2c5315f43f3a35cbca9a11f46f83f6c9832db53412cafe: Status 404 returned error can't find the container with id eb09332c54f521b49b2c5315f43f3a35cbca9a11f46f83f6c9832db53412cafe Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.325813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerStarted","Data":"eb09332c54f521b49b2c5315f43f3a35cbca9a11f46f83f6c9832db53412cafe"} Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.343755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" event={"ID":"48c1d56e-700b-4165-9cf2-473a90148c08","Type":"ContainerStarted","Data":"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24"} Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.346047 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.350153 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerStarted","Data":"cbae2e2051eb7560ef8f95e830a9f8b21adc6f75e24b8a62452b3cca75148da6"} Jan 29 15:58:53 crc kubenswrapper[4835]: E0129 15:58:53.366701 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" 
podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.377992 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" podStartSLOduration=3.377975088 podStartE2EDuration="3.377975088s" podCreationTimestamp="2026-01-29 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:53.371652609 +0000 UTC m=+1835.564696273" watchObservedRunningTime="2026-01-29 15:58:53.377975088 +0000 UTC m=+1835.571018742" Jan 29 15:58:53 crc kubenswrapper[4835]: I0129 15:58:53.493797 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:58:53 crc kubenswrapper[4835]: E0129 15:58:53.493988 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:58:54 crc kubenswrapper[4835]: I0129 15:58:54.370038 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerStarted","Data":"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea"} Jan 29 15:58:54 crc kubenswrapper[4835]: I0129 15:58:54.374804 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerStarted","Data":"905e2ab744e7db112cefd78ed8ce26c901622eb2fc8fb2e81ee9517e1c35f494"} Jan 29 15:58:54 crc kubenswrapper[4835]: I0129 15:58:54.913896 4835 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.019430 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025331 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025495 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025546 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025678 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: 
\"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025722 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbtqq\" (UniqueName: \"kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq\") pod \"3793fb25-49d3-4193-bf24-02fa97710d0b\" (UID: \"3793fb25-49d3-4193-bf24-02fa97710d0b\") " Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.025801 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.026208 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3793fb25-49d3-4193-bf24-02fa97710d0b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.031934 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.032372 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts" (OuterVolumeSpecName: "scripts") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.034435 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq" (OuterVolumeSpecName: "kube-api-access-zbtqq") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "kube-api-access-zbtqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.090970 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.128651 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.129965 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.129981 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbtqq\" (UniqueName: \"kubernetes.io/projected/3793fb25-49d3-4193-bf24-02fa97710d0b-kube-api-access-zbtqq\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.154460 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.226641 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data" (OuterVolumeSpecName: "config-data") pod "3793fb25-49d3-4193-bf24-02fa97710d0b" (UID: "3793fb25-49d3-4193-bf24-02fa97710d0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.232740 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.232777 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3793fb25-49d3-4193-bf24-02fa97710d0b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.386100 4835 generic.go:334] "Generic (PLEG): container finished" podID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerID="fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e" exitCode=0 Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.386174 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerDied","Data":"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e"} Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.386203 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3793fb25-49d3-4193-bf24-02fa97710d0b","Type":"ContainerDied","Data":"fca7fe62adaf5a1eff708721ff8a4f2e5c464075dc2a4cb14c99ef8be678ce3a"} Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.386220 4835 scope.go:117] "RemoveContainer" 
containerID="e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.386364 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.405552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerStarted","Data":"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd"} Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.405749 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-log" containerID="cri-o://faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" gracePeriod=30 Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.406343 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-httpd" containerID="cri-o://3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" gracePeriod=30 Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.416629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerStarted","Data":"a0c7f886a3bc6e9e63699991c93c0b30b91bc00da7e37f6bd55bbdce92d2102d"} Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.416801 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-log" containerID="cri-o://905e2ab744e7db112cefd78ed8ce26c901622eb2fc8fb2e81ee9517e1c35f494" gracePeriod=30 Jan 29 15:58:55 crc 
kubenswrapper[4835]: I0129 15:58:55.416999 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-httpd" containerID="cri-o://a0c7f886a3bc6e9e63699991c93c0b30b91bc00da7e37f6bd55bbdce92d2102d" gracePeriod=30 Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.448089 4835 scope.go:117] "RemoveContainer" containerID="fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.455123 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.45509841 podStartE2EDuration="5.45509841s" podCreationTimestamp="2026-01-29 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:55.43789873 +0000 UTC m=+1837.630942384" watchObservedRunningTime="2026-01-29 15:58:55.45509841 +0000 UTC m=+1837.648142074" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.490064 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.501930 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.534647 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.534626322 podStartE2EDuration="5.534626322s" podCreationTimestamp="2026-01-29 15:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:55.488923247 +0000 UTC m=+1837.681966901" watchObservedRunningTime="2026-01-29 15:58:55.534626322 +0000 UTC m=+1837.727669976" Jan 29 15:58:55 crc 
kubenswrapper[4835]: I0129 15:58:55.535942 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: E0129 15:58:55.536435 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="probe" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.536457 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="probe" Jan 29 15:58:55 crc kubenswrapper[4835]: E0129 15:58:55.536505 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="cinder-scheduler" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.536517 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="cinder-scheduler" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.536819 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="cinder-scheduler" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.536844 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" containerName="probe" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.538009 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.544786 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.577102 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.653850 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.653910 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssh5\" (UniqueName: \"kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.653946 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.653973 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.654013 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.654061 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.731932 4835 scope.go:117] "RemoveContainer" containerID="e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6" Jan 29 15:58:55 crc kubenswrapper[4835]: E0129 15:58:55.745121 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6\": container with ID starting with e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6 not found: ID does not exist" containerID="e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.745177 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6"} err="failed to get container status \"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6\": rpc error: code = NotFound desc = could not find container \"e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6\": container with ID starting with e051c907fda2f3edc2b2084153040263548fa64b4a92a39a95d46ec4d26584a6 not found: ID does not exist" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 
15:58:55.745211 4835 scope.go:117] "RemoveContainer" containerID="fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e" Jan 29 15:58:55 crc kubenswrapper[4835]: E0129 15:58:55.745705 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e\": container with ID starting with fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e not found: ID does not exist" containerID="fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.745741 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e"} err="failed to get container status \"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e\": rpc error: code = NotFound desc = could not find container \"fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e\": container with ID starting with fd3ee68cabb23246d66b4f944eea958c77561d377eaf7fc5019202d928c07a1e not found: ID does not exist" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756047 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lssh5\" (UniqueName: \"kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756124 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756150 
4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756191 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756251 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.756371 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.757244 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.768963 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts\") 
pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.781845 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.788022 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.788402 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.790959 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.808752 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lssh5\" (UniqueName: \"kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5\") pod \"cinder-scheduler-0\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " pod="openstack/cinder-scheduler-0" Jan 29 15:58:55 crc kubenswrapper[4835]: I0129 15:58:55.880758 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 
15:58:56.039641 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.074298 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.167444 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.268558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269024 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269090 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269199 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269220 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269280 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjwss\" (UniqueName: \"kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.269312 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs\") pod \"485db9dd-3094-4b1b-b79e-52df36d04c82\" (UID: \"485db9dd-3094-4b1b-b79e-52df36d04c82\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.274023 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs" (OuterVolumeSpecName: "logs") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.278335 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.278742 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.288636 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss" (OuterVolumeSpecName: "kube-api-access-qjwss") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "kube-api-access-qjwss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.298611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts" (OuterVolumeSpecName: "scripts") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.312108 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377560 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377622 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377638 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjwss\" (UniqueName: \"kubernetes.io/projected/485db9dd-3094-4b1b-b79e-52df36d04c82-kube-api-access-qjwss\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377656 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/485db9dd-3094-4b1b-b79e-52df36d04c82-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377669 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.377680 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.389788 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data" (OuterVolumeSpecName: "config-data") pod "485db9dd-3094-4b1b-b79e-52df36d04c82" (UID: "485db9dd-3094-4b1b-b79e-52df36d04c82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.404969 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.447816 4835 generic.go:334] "Generic (PLEG): container finished" podID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerID="3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" exitCode=143 Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.447860 4835 generic.go:334] "Generic (PLEG): container finished" podID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerID="faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" exitCode=143 Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.447994 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerDied","Data":"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd"} Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.448024 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.448035 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerDied","Data":"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea"} Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.448047 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"485db9dd-3094-4b1b-b79e-52df36d04c82","Type":"ContainerDied","Data":"eb09332c54f521b49b2c5315f43f3a35cbca9a11f46f83f6c9832db53412cafe"} Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.448074 4835 scope.go:117] "RemoveContainer" containerID="3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.457897 4835 generic.go:334] "Generic (PLEG): container finished" podID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerID="a0c7f886a3bc6e9e63699991c93c0b30b91bc00da7e37f6bd55bbdce92d2102d" exitCode=0 Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.457931 4835 generic.go:334] "Generic (PLEG): container finished" podID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerID="905e2ab744e7db112cefd78ed8ce26c901622eb2fc8fb2e81ee9517e1c35f494" exitCode=143 Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.459049 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerDied","Data":"a0c7f886a3bc6e9e63699991c93c0b30b91bc00da7e37f6bd55bbdce92d2102d"} Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.459084 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerDied","Data":"905e2ab744e7db112cefd78ed8ce26c901622eb2fc8fb2e81ee9517e1c35f494"} Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.476524 4835 scope.go:117] "RemoveContainer" containerID="faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.480166 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.480504 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/485db9dd-3094-4b1b-b79e-52df36d04c82-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.529452 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3793fb25-49d3-4193-bf24-02fa97710d0b" path="/var/lib/kubelet/pods/3793fb25-49d3-4193-bf24-02fa97710d0b/volumes" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.530208 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.530252 4835 scope.go:117] "RemoveContainer" containerID="3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531177 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:56 crc kubenswrapper[4835]: E0129 15:58:56.531220 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd\": container with ID starting with 3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd not found: ID does not exist" 
containerID="3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531268 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd"} err="failed to get container status \"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd\": rpc error: code = NotFound desc = could not find container \"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd\": container with ID starting with 3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd not found: ID does not exist" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531299 4835 scope.go:117] "RemoveContainer" containerID="faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" Jan 29 15:58:56 crc kubenswrapper[4835]: E0129 15:58:56.531722 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea\": container with ID starting with faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea not found: ID does not exist" containerID="faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531739 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea"} err="failed to get container status \"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea\": rpc error: code = NotFound desc = could not find container \"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea\": container with ID starting with faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea not found: ID does not exist" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531762 4835 scope.go:117] 
"RemoveContainer" containerID="3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531970 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd"} err="failed to get container status \"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd\": rpc error: code = NotFound desc = could not find container \"3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd\": container with ID starting with 3c71160afa2cbd49770029c0b6e68012f5ad6cffc8d72e0f121c02592805cacd not found: ID does not exist" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.531986 4835 scope.go:117] "RemoveContainer" containerID="faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.532139 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea"} err="failed to get container status \"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea\": rpc error: code = NotFound desc = could not find container \"faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea\": container with ID starting with faf26dde9564f5ef8e38a6e4f99f8b9a7042ad77889a8f3b803228c8652ecdea not found: ID does not exist" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.544025 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:56 crc kubenswrapper[4835]: E0129 15:58:56.544535 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-httpd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.544555 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" 
containerName="glance-httpd" Jan 29 15:58:56 crc kubenswrapper[4835]: E0129 15:58:56.544605 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-log" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.544613 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-log" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.544890 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-httpd" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.544920 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" containerName="glance-log" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.546207 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.556570 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.558741 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.558775 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582416 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582454 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582503 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582581 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582615 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582630 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgrm\" (UniqueName: \"kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 
15:58:56.582649 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.582689 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707124 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707172 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xgrm\" (UniqueName: \"kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707202 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707248 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707272 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707290 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707321 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707394 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.707879 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.708055 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.708926 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.714590 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.714768 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.723505 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.723976 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.733605 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xgrm\" (UniqueName: \"kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.752050 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.761220 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.766218 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.896254 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.913682 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.913752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.913896 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914448 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914603 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914633 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914665 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914692 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22j5x\" (UniqueName: \"kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x\") pod \"3d5ce9df-957d-4bff-84a4-b74104f8976a\" (UID: \"3d5ce9df-957d-4bff-84a4-b74104f8976a\") " Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.914910 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs" (OuterVolumeSpecName: "logs") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.915529 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.915548 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d5ce9df-957d-4bff-84a4-b74104f8976a-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.925706 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts" (OuterVolumeSpecName: "scripts") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.925862 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x" (OuterVolumeSpecName: "kube-api-access-22j5x") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "kube-api-access-22j5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.925886 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.944988 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:56 crc kubenswrapper[4835]: I0129 15:58:56.976123 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data" (OuterVolumeSpecName: "config-data") pod "3d5ce9df-957d-4bff-84a4-b74104f8976a" (UID: "3d5ce9df-957d-4bff-84a4-b74104f8976a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.019258 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.019292 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.019306 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22j5x\" (UniqueName: \"kubernetes.io/projected/3d5ce9df-957d-4bff-84a4-b74104f8976a-kube-api-access-22j5x\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.019316 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.019325 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d5ce9df-957d-4bff-84a4-b74104f8976a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.053691 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.120842 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.470680 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3d5ce9df-957d-4bff-84a4-b74104f8976a","Type":"ContainerDied","Data":"cbae2e2051eb7560ef8f95e830a9f8b21adc6f75e24b8a62452b3cca75148da6"} Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.470704 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.471059 4835 scope.go:117] "RemoveContainer" containerID="a0c7f886a3bc6e9e63699991c93c0b30b91bc00da7e37f6bd55bbdce92d2102d" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.476804 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerStarted","Data":"f4e109ec1081fd2dff07ecc8d5fb3b233bbb0a66b55e7e8dbe3d25f609fd04c2"} Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.479253 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhf9x" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" containerID="cri-o://cca6cb344c450ca84d89393f81ad8e25dbaa55092d4440fc873f5c140125b10d" gracePeriod=2 Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.527234 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.543133 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.545124 4835 scope.go:117] "RemoveContainer" containerID="905e2ab744e7db112cefd78ed8ce26c901622eb2fc8fb2e81ee9517e1c35f494" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.562504 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.574560 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:57 crc kubenswrapper[4835]: E0129 15:58:57.575002 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-log" Jan 29 15:58:57 crc 
kubenswrapper[4835]: I0129 15:58:57.575017 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-log" Jan 29 15:58:57 crc kubenswrapper[4835]: E0129 15:58:57.575035 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-httpd" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.575041 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-httpd" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.575208 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-httpd" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.575225 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" containerName="glance-log" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.576320 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.593220 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.597598 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.600373 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.733664 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.733985 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734028 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734060 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734078 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjvd\" (UniqueName: \"kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734135 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734169 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.734215 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837714 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837824 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837903 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837926 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837966 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.837992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs\") pod 
\"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.838010 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjvd\" (UniqueName: \"kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.838058 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.838174 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.838558 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.838938 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs\") pod \"glance-default-external-api-0\" (UID: 
\"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.843289 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.846038 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.849306 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.850132 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.856595 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjvd\" (UniqueName: \"kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" 
Jan 29 15:58:57 crc kubenswrapper[4835]: I0129 15:58:57.896187 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " pod="openstack/glance-default-external-api-0" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.138277 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.542978 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5ce9df-957d-4bff-84a4-b74104f8976a" path="/var/lib/kubelet/pods/3d5ce9df-957d-4bff-84a4-b74104f8976a/volumes" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.545345 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="485db9dd-3094-4b1b-b79e-52df36d04c82" path="/var/lib/kubelet/pods/485db9dd-3094-4b1b-b79e-52df36d04c82/volumes" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.621383 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerStarted","Data":"a28a5cec9281d7421cb36e3f7d37a107045c2b59a77ddfcbfcbee35f116de902"} Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.627793 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerStarted","Data":"1e248ee2087d7f092975d7a456f76648e8185787d354e0adee80b4e293a5b132"} Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.632034 4835 generic.go:334] "Generic (PLEG): container finished" podID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerID="cca6cb344c450ca84d89393f81ad8e25dbaa55092d4440fc873f5c140125b10d" exitCode=0 Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.632362 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerDied","Data":"cca6cb344c450ca84d89393f81ad8e25dbaa55092d4440fc873f5c140125b10d"} Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.776019 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.836972 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.873456 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzhcv\" (UniqueName: \"kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv\") pod \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.873560 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities\") pod \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.873623 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content\") pod \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\" (UID: \"9a1ed6cf-5cdd-4ccc-841b-56840df89d98\") " Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.875178 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities" (OuterVolumeSpecName: "utilities") pod "9a1ed6cf-5cdd-4ccc-841b-56840df89d98" (UID: 
"9a1ed6cf-5cdd-4ccc-841b-56840df89d98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.883083 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv" (OuterVolumeSpecName: "kube-api-access-pzhcv") pod "9a1ed6cf-5cdd-4ccc-841b-56840df89d98" (UID: "9a1ed6cf-5cdd-4ccc-841b-56840df89d98"). InnerVolumeSpecName "kube-api-access-pzhcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.946931 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a1ed6cf-5cdd-4ccc-841b-56840df89d98" (UID: "9a1ed6cf-5cdd-4ccc-841b-56840df89d98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.975964 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzhcv\" (UniqueName: \"kubernetes.io/projected/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-kube-api-access-pzhcv\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.975995 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:58 crc kubenswrapper[4835]: I0129 15:58:58.976004 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a1ed6cf-5cdd-4ccc-841b-56840df89d98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.653751 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerStarted","Data":"c94af0a7ac5f465740b07c825d24bcc25558743b04fe206dbb892888c984fad1"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.654016 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerStarted","Data":"6e7b075215065ce406f64ecf4e2094a6c00cba77cf2a70533e517b0504ce4ec9"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.656957 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhf9x" event={"ID":"9a1ed6cf-5cdd-4ccc-841b-56840df89d98","Type":"ContainerDied","Data":"08bea58f294112030bf34464868e8a0483c2dfe3df616bf5730ce102d0d2638b"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.656996 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhf9x" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.657018 4835 scope.go:117] "RemoveContainer" containerID="cca6cb344c450ca84d89393f81ad8e25dbaa55092d4440fc873f5c140125b10d" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.663362 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerStarted","Data":"a082287693dcfeaff31725125ccc4d3a066df9df54d2d6fd4338eaa474793de0"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.663400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerStarted","Data":"d96c67680a5ba997cd5541aa7e630e32782392e02f8e9e6b4a1da019235962ee"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.675557 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerStarted","Data":"570734336d9d7b25a8857e635d8a9706c455de491fa31af1b769d6d6503b8994"} Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.684950 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.684925459 podStartE2EDuration="3.684925459s" podCreationTimestamp="2026-01-29 15:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:59.675244587 +0000 UTC m=+1841.868288241" watchObservedRunningTime="2026-01-29 15:58:59.684925459 +0000 UTC m=+1841.877969113" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.715562 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.715544556 podStartE2EDuration="4.715544556s" podCreationTimestamp="2026-01-29 15:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:58:59.705627117 +0000 UTC m=+1841.898670771" watchObservedRunningTime="2026-01-29 15:58:59.715544556 +0000 UTC m=+1841.908588210" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.812898 4835 scope.go:117] "RemoveContainer" containerID="f621a7ee980abdadb9510785914c9b599b5e327689471982d67e9a158e626384" Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.816132 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.832267 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhf9x"] Jan 29 15:58:59 crc kubenswrapper[4835]: I0129 15:58:59.872822 4835 scope.go:117] "RemoveContainer" containerID="31a33426b9ef395dd17ef2e1de93164e2013fc9d0b9cd505ba4cfad5594f8cea" Jan 29 
15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.239314 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.239753 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-central-agent" containerID="cri-o://323900448afe12597912167cec80c6055f1cfa296cfa30f13e8c1ae7e85ab7a5" gracePeriod=30 Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.240116 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="sg-core" containerID="cri-o://cad574b1637439f957e68f49246b39bbce29a258431d631b0bbeebb7357aa84b" gracePeriod=30 Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.240159 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-notification-agent" containerID="cri-o://a8d8ab1ab23f1b95dac089ca2e28dcbde834129ee761da5435b25a8fb918c2a3" gracePeriod=30 Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.513211 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" path="/var/lib/kubelet/pods/9a1ed6cf-5cdd-4ccc-841b-56840df89d98/volumes" Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.699151 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerDied","Data":"cad574b1637439f957e68f49246b39bbce29a258431d631b0bbeebb7357aa84b"} Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.699163 4835 generic.go:334] "Generic (PLEG): container finished" podID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerID="cad574b1637439f957e68f49246b39bbce29a258431d631b0bbeebb7357aa84b" exitCode=2 
Jan 29 15:59:00 crc kubenswrapper[4835]: I0129 15:59:00.992628 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.041199 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.076924 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.077174 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76bb4997-f74rm" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" containerID="cri-o://6106f7585ea7f844099a776fe785fa3fb3409db8617c0c3ccf9f99bfa090ad01" gracePeriod=10 Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.535112 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.717511 4835 generic.go:334] "Generic (PLEG): container finished" podID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerID="a8d8ab1ab23f1b95dac089ca2e28dcbde834129ee761da5435b25a8fb918c2a3" exitCode=0 Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.717586 4835 generic.go:334] "Generic (PLEG): container finished" podID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerID="323900448afe12597912167cec80c6055f1cfa296cfa30f13e8c1ae7e85ab7a5" exitCode=0 Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.717719 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerDied","Data":"a8d8ab1ab23f1b95dac089ca2e28dcbde834129ee761da5435b25a8fb918c2a3"} Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.717806 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerDied","Data":"323900448afe12597912167cec80c6055f1cfa296cfa30f13e8c1ae7e85ab7a5"} Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.720261 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerStarted","Data":"e1cc3b707a48c3592522f8b5901223aed79fe5fe3e964f3494ec107ef4eeefd7"} Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.722985 4835 generic.go:334] "Generic (PLEG): container finished" podID="fcced40f-58fc-451b-b87a-4d607febc581" containerID="6106f7585ea7f844099a776fe785fa3fb3409db8617c0c3ccf9f99bfa090ad01" exitCode=0 Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.723081 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerDied","Data":"6106f7585ea7f844099a776fe785fa3fb3409db8617c0c3ccf9f99bfa090ad01"} Jan 29 15:59:01 crc kubenswrapper[4835]: I0129 15:59:01.753491 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.753454217 podStartE2EDuration="4.753454217s" podCreationTimestamp="2026-01-29 15:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:59:01.74239823 +0000 UTC m=+1843.935441884" watchObservedRunningTime="2026-01-29 15:59:01.753454217 +0000 UTC m=+1843.946497871" Jan 29 15:59:02 crc kubenswrapper[4835]: I0129 15:59:02.731914 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-log" containerID="cri-o://a082287693dcfeaff31725125ccc4d3a066df9df54d2d6fd4338eaa474793de0" gracePeriod=30 Jan 29 15:59:02 crc 
kubenswrapper[4835]: I0129 15:59:02.733147 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-httpd" containerID="cri-o://e1cc3b707a48c3592522f8b5901223aed79fe5fe3e964f3494ec107ef4eeefd7" gracePeriod=30 Jan 29 15:59:03 crc kubenswrapper[4835]: I0129 15:59:03.752364 4835 generic.go:334] "Generic (PLEG): container finished" podID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerID="e1cc3b707a48c3592522f8b5901223aed79fe5fe3e964f3494ec107ef4eeefd7" exitCode=0 Jan 29 15:59:03 crc kubenswrapper[4835]: I0129 15:59:03.752404 4835 generic.go:334] "Generic (PLEG): container finished" podID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerID="a082287693dcfeaff31725125ccc4d3a066df9df54d2d6fd4338eaa474793de0" exitCode=143 Jan 29 15:59:03 crc kubenswrapper[4835]: I0129 15:59:03.752434 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerDied","Data":"e1cc3b707a48c3592522f8b5901223aed79fe5fe3e964f3494ec107ef4eeefd7"} Jan 29 15:59:03 crc kubenswrapper[4835]: I0129 15:59:03.752484 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerDied","Data":"a082287693dcfeaff31725125ccc4d3a066df9df54d2d6fd4338eaa474793de0"} Jan 29 15:59:04 crc kubenswrapper[4835]: I0129 15:59:04.788148 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:04 crc kubenswrapper[4835]: I0129 15:59:04.788857 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-log" containerID="cri-o://6e7b075215065ce406f64ecf4e2094a6c00cba77cf2a70533e517b0504ce4ec9" 
gracePeriod=30 Jan 29 15:59:04 crc kubenswrapper[4835]: I0129 15:59:04.789041 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-httpd" containerID="cri-o://c94af0a7ac5f465740b07c825d24bcc25558743b04fe206dbb892888c984fad1" gracePeriod=30 Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.186734 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76bb4997-f74rm" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.492414 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:59:05 crc kubenswrapper[4835]: E0129 15:59:05.492795 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.615082 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 15:59:05 crc kubenswrapper[4835]: E0129 15:59:05.615438 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.615453 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" Jan 29 15:59:05 crc kubenswrapper[4835]: E0129 
15:59:05.615488 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="extract-utilities" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.615495 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="extract-utilities" Jan 29 15:59:05 crc kubenswrapper[4835]: E0129 15:59:05.615517 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="extract-content" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.615524 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="extract-content" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.615691 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1ed6cf-5cdd-4ccc-841b-56840df89d98" containerName="registry-server" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.616859 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.634725 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.716791 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.717171 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.717359 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6hc\" (UniqueName: \"kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.819460 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.819548 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ws6hc\" (UniqueName: \"kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.819629 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.820127 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.820135 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.850492 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws6hc\" (UniqueName: \"kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc\") pod \"redhat-marketplace-jrwpz\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:05 crc kubenswrapper[4835]: I0129 15:59:05.940817 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 15:59:06 crc kubenswrapper[4835]: I0129 15:59:06.271282 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 15:59:06 crc kubenswrapper[4835]: I0129 15:59:06.795243 4835 generic.go:334] "Generic (PLEG): container finished" podID="b533af07-f6d8-480e-b749-018fbcc90954" containerID="6e7b075215065ce406f64ecf4e2094a6c00cba77cf2a70533e517b0504ce4ec9" exitCode=143 Jan 29 15:59:06 crc kubenswrapper[4835]: I0129 15:59:06.795294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerDied","Data":"6e7b075215065ce406f64ecf4e2094a6c00cba77cf2a70533e517b0504ce4ec9"} Jan 29 15:59:08 crc kubenswrapper[4835]: I0129 15:59:08.818442 4835 generic.go:334] "Generic (PLEG): container finished" podID="b533af07-f6d8-480e-b749-018fbcc90954" containerID="c94af0a7ac5f465740b07c825d24bcc25558743b04fe206dbb892888c984fad1" exitCode=0 Jan 29 15:59:08 crc kubenswrapper[4835]: I0129 15:59:08.818573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerDied","Data":"c94af0a7ac5f465740b07c825d24bcc25558743b04fe206dbb892888c984fad1"} Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.240341 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.302987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hxbt\" (UniqueName: \"kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.303128 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.303212 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.303326 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.303363 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.303400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0\") pod \"fcced40f-58fc-451b-b87a-4d607febc581\" (UID: \"fcced40f-58fc-451b-b87a-4d607febc581\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.319938 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt" (OuterVolumeSpecName: "kube-api-access-2hxbt") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "kube-api-access-2hxbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.371930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.375843 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config" (OuterVolumeSpecName: "config") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.383817 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.383993 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.396308 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcced40f-58fc-451b-b87a-4d607febc581" (UID: "fcced40f-58fc-451b-b87a-4d607febc581"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405448 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-config\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405504 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405517 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hxbt\" (UniqueName: \"kubernetes.io/projected/fcced40f-58fc-451b-b87a-4d607febc581-kube-api-access-2hxbt\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405526 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-dns-svc\") on node \"crc\" DevicePath \"\"" 
Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405534 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.405543 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fcced40f-58fc-451b-b87a-4d607febc581-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.411913 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.449694 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506229 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506334 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506394 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506663 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506726 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xgrm\" (UniqueName: \"kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.506795 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs\") pod \"b533af07-f6d8-480e-b749-018fbcc90954\" (UID: \"b533af07-f6d8-480e-b749-018fbcc90954\") " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.507091 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: 
"b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.507994 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.508345 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs" (OuterVolumeSpecName: "logs") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.511555 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.522619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts" (OuterVolumeSpecName: "scripts") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.522685 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm" (OuterVolumeSpecName: "kube-api-access-4xgrm") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). 
InnerVolumeSpecName "kube-api-access-4xgrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.541855 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.585638 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.594186 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data" (OuterVolumeSpecName: "config-data") pod "b533af07-f6d8-480e-b749-018fbcc90954" (UID: "b533af07-f6d8-480e-b749-018fbcc90954"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609731 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b533af07-f6d8-480e-b749-018fbcc90954-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609760 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xgrm\" (UniqueName: \"kubernetes.io/projected/b533af07-f6d8-480e-b749-018fbcc90954-kube-api-access-4xgrm\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609771 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609780 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609789 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609797 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533af07-f6d8-480e-b749-018fbcc90954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.609820 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.644821 4835 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.711906 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.851982 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90a18ee1-6e48-4cb1-9688-54de76668cca","Type":"ContainerDied","Data":"ff45a3952b2301782ffecaf6a08731db105eb4428e9f93d74358493d259c7ca1"} Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.852368 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff45a3952b2301782ffecaf6a08731db105eb4428e9f93d74358493d259c7ca1" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.854069 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b533af07-f6d8-480e-b749-018fbcc90954","Type":"ContainerDied","Data":"1e248ee2087d7f092975d7a456f76648e8185787d354e0adee80b4e293a5b132"} Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.854105 4835 scope.go:117] "RemoveContainer" containerID="c94af0a7ac5f465740b07c825d24bcc25558743b04fe206dbb892888c984fad1" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.854141 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.855230 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerStarted","Data":"f648e47796506dc4efa9bcfac7e985f72bf1cb9c7dee7f715912ed9b800a5efa"} Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.856524 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76bb4997-f74rm" event={"ID":"fcced40f-58fc-451b-b87a-4d607febc581","Type":"ContainerDied","Data":"682945b05db2dbd414f981c861dcc8cc41a2447345c1bb13575ef7328ee30f3d"} Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.856604 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76bb4997-f74rm" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.945978 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.969423 4835 scope.go:117] "RemoveContainer" containerID="6e7b075215065ce406f64ecf4e2094a6c00cba77cf2a70533e517b0504ce4ec9" Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.973650 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:10 crc kubenswrapper[4835]: I0129 15:59:10.990411 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.005538 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.012810 4835 scope.go:117] "RemoveContainer" containerID="6106f7585ea7f844099a776fe785fa3fb3409db8617c0c3ccf9f99bfa090ad01" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016186 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gjks\" (UniqueName: \"kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc 
kubenswrapper[4835]: I0129 15:59:11.016260 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016339 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016429 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016455 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd\") pod \"90a18ee1-6e48-4cb1-9688-54de76668cca\" (UID: \"90a18ee1-6e48-4cb1-9688-54de76668cca\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.016929 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.017133 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.021650 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts" (OuterVolumeSpecName: "scripts") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.021704 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76bb4997-f74rm"] Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.043189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks" (OuterVolumeSpecName: "kube-api-access-8gjks") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "kube-api-access-8gjks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.050734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.062539 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063113 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-httpd" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063140 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-httpd" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063175 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="init" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063185 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="init" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063198 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-log" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063207 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-log" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063221 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-notification-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063229 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-notification-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063252 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="sg-core" Jan 29 
15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063260 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="sg-core" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063282 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-central-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063290 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-central-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.063318 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063326 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063566 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-central-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063906 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="ceilometer-notification-agent" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.063989 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b533af07-f6d8-480e-b749-018fbcc90954" containerName="glance-log" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.064518 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" containerName="sg-core" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.064547 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b533af07-f6d8-480e-b749-018fbcc90954" 
containerName="glance-httpd" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.064615 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.075172 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.078967 4835 scope.go:117] "RemoveContainer" containerID="a178bbdb8daa123c4ccc54844248519caf6837bf8d072d1d1c023fe089ae4ea6" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.082078 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.084445 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.089607 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data" (OuterVolumeSpecName: "config-data") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.092541 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90a18ee1-6e48-4cb1-9688-54de76668cca" (UID: "90a18ee1-6e48-4cb1-9688-54de76668cca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.111803 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.120141 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.120498 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.120692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.120844 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121035 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121159 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121302 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121429 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58r4n\" (UniqueName: \"kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121632 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gjks\" (UniqueName: \"kubernetes.io/projected/90a18ee1-6e48-4cb1-9688-54de76668cca-kube-api-access-8gjks\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121728 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc 
kubenswrapper[4835]: I0129 15:59:11.121792 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121855 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121918 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.121980 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90a18ee1-6e48-4cb1-9688-54de76668cca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.122039 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90a18ee1-6e48-4cb1-9688-54de76668cca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223784 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223842 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") 
" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223891 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223923 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223979 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.223998 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.224025 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc 
kubenswrapper[4835]: I0129 15:59:11.224043 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58r4n\" (UniqueName: \"kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.224339 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.224367 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.224631 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.228710 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.229956 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.235208 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.236856 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.243576 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58r4n\" (UniqueName: \"kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.258181 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") " pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.443349 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.496509 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.496666 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readi
nessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-7cvqt_openstack(3b5cbfb3-cf93-4a36-ab1d-d64d65e68858): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.497904 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.794331 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836004 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836159 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836292 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836324 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836383 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836423 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836478 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjvd\" (UniqueName: \"kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.836539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.837005 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.837389 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs" (OuterVolumeSpecName: "logs") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.842662 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd" (OuterVolumeSpecName: "kube-api-access-zdjvd") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "kube-api-access-zdjvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.843136 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts" (OuterVolumeSpecName: "scripts") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.846080 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.870002 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.871969 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf","Type":"ContainerDied","Data":"d96c67680a5ba997cd5541aa7e630e32782392e02f8e9e6b4a1da019235962ee"} Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.872010 4835 scope.go:117] "RemoveContainer" containerID="e1cc3b707a48c3592522f8b5901223aed79fe5fe3e964f3494ec107ef4eeefd7" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.872019 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.874300 4835 generic.go:334] "Generic (PLEG): container finished" podID="9068c986-c447-405a-8620-069aa4d48469" containerID="dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f" exitCode=0 Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.874589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerDied","Data":"dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f"} Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.878009 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:11 crc kubenswrapper[4835]: E0129 15:59:11.881650 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:22f097cb86b28ac48dc670ed7e0e841280bef1608f11b2b4536fbc2d2a6a90be\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.939979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.941847 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") pod \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\" (UID: \"c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf\") " Jan 29 15:59:11 crc kubenswrapper[4835]: W0129 15:59:11.942174 4835 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf/volumes/kubernetes.io~secret/public-tls-certs Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.942204 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945165 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945200 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-logs\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945219 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945231 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945257 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945279 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.945290 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjvd\" (UniqueName: \"kubernetes.io/projected/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-kube-api-access-zdjvd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.947110 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data" (OuterVolumeSpecName: "config-data") pod "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" (UID: "c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:11 crc kubenswrapper[4835]: I0129 15:59:11.974334 4835 scope.go:117] "RemoveContainer" containerID="a082287693dcfeaff31725125ccc4d3a066df9df54d2d6fd4338eaa474793de0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.009770 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.021381 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.024578 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.024740 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws6hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jrwpz_openshift-marketplace(9068c986-c447-405a-8620-069aa4d48469): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.027662 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.028703 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.041302 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.041815 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-log" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.041836 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-log" Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.041856 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-httpd" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.041862 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-httpd" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.042035 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-httpd" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.042048 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" containerName="glance-log" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.043981 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.047014 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.047053 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.053260 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.054188 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.055675 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.064695 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.148781 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.148854 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 
15:59:12.148890 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qs4\" (UniqueName: \"kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.148914 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.148946 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.149075 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.149190 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.216120 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:12 crc 
kubenswrapper[4835]: I0129 15:59:12.240955 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252152 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252251 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252324 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252364 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252387 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4qs4\" (UniqueName: \"kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252414 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.252440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.253171 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.256148 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.260237 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.261519 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " 
pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.265399 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.267891 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.269871 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.273053 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.275562 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.276129 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4qs4\" (UniqueName: \"kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4\") pod \"ceilometer-0\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.276280 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.279972 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.354282 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.354355 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.354546 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.355292 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.355375 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.355436 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.355540 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlmjp\" (UniqueName: \"kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.355597 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.371804 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457292 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457374 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlmjp\" (UniqueName: \"kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457422 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457507 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457558 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 
15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457595 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.457679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.458586 4835 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.458722 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.458945 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.463598 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.464086 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.467269 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.469695 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.479066 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlmjp\" (UniqueName: 
\"kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.496643 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.506026 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a18ee1-6e48-4cb1-9688-54de76668cca" path="/var/lib/kubelet/pods/90a18ee1-6e48-4cb1-9688-54de76668cca/volumes" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.506994 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b533af07-f6d8-480e-b749-018fbcc90954" path="/var/lib/kubelet/pods/b533af07-f6d8-480e-b749-018fbcc90954/volumes" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.508231 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf" path="/var/lib/kubelet/pods/c6e7c196-6d52-4e49-bdfe-f6c2c89ccecf/volumes" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.508964 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcced40f-58fc-451b-b87a-4d607febc581" path="/var/lib/kubelet/pods/fcced40f-58fc-451b-b87a-4d607febc581/volumes" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.597505 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.826518 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:12 crc kubenswrapper[4835]: W0129 15:59:12.827293 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebb9e08f_194a_4bb3_b9b7_c0ee43f563c6.slice/crio-741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde WatchSource:0}: Error finding container 741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde: Status 404 returned error can't find the container with id 741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.887909 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerStarted","Data":"741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde"} Jan 29 15:59:12 crc kubenswrapper[4835]: I0129 15:59:12.889988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerStarted","Data":"bc4ca779c8ab9be9e31dcfbc8c301cc4c6547397410df943c3fbd9a3ecb733ac"} Jan 29 15:59:12 crc kubenswrapper[4835]: E0129 15:59:12.894126 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 15:59:13 crc kubenswrapper[4835]: I0129 15:59:13.093295 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 15:59:13 crc kubenswrapper[4835]: W0129 15:59:13.094745 4835 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0429dcb_ecbc_4cdb_8e05_7a8f8f3a4141.slice/crio-3b5369c948091099a56fc09f3cf2ee575eb9964c818567bc0207e00ae3fb2c48 WatchSource:0}: Error finding container 3b5369c948091099a56fc09f3cf2ee575eb9964c818567bc0207e00ae3fb2c48: Status 404 returned error can't find the container with id 3b5369c948091099a56fc09f3cf2ee575eb9964c818567bc0207e00ae3fb2c48 Jan 29 15:59:13 crc kubenswrapper[4835]: I0129 15:59:13.901251 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerStarted","Data":"34a15120c83268cc3829764d188bfd5b463d82edd23441f010b64a781fa805fd"} Jan 29 15:59:13 crc kubenswrapper[4835]: I0129 15:59:13.901634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerStarted","Data":"3b5369c948091099a56fc09f3cf2ee575eb9964c818567bc0207e00ae3fb2c48"} Jan 29 15:59:13 crc kubenswrapper[4835]: I0129 15:59:13.907552 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerStarted","Data":"4bddd694b3a71f4c157e46ee4469e5cde7015e8f59a7762ccb3f61c557f57358"} Jan 29 15:59:14 crc kubenswrapper[4835]: I0129 15:59:14.919226 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerStarted","Data":"da115e8709897068c943f4fb977b6a69f1f71d0f9c94591378a80a37d0e541cb"} Jan 29 15:59:14 crc kubenswrapper[4835]: I0129 15:59:14.950559 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.950537132 podStartE2EDuration="4.950537132s" 
podCreationTimestamp="2026-01-29 15:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:59:14.94487343 +0000 UTC m=+1857.137917084" watchObservedRunningTime="2026-01-29 15:59:14.950537132 +0000 UTC m=+1857.143580786" Jan 29 15:59:15 crc kubenswrapper[4835]: I0129 15:59:15.187301 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76bb4997-f74rm" podUID="fcced40f-58fc-451b-b87a-4d607febc581" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Jan 29 15:59:15 crc kubenswrapper[4835]: I0129 15:59:15.932614 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerStarted","Data":"024c1b9e43847d2606d0aaf1470ab7b043f475eb17ce51f35fa0cf4e402923e1"} Jan 29 15:59:15 crc kubenswrapper[4835]: I0129 15:59:15.937082 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerStarted","Data":"b7c3a6f248cc10844b4a82f340816ecc29e9dbbc1ba3d95f6ca0198bc2060772"} Jan 29 15:59:15 crc kubenswrapper[4835]: I0129 15:59:15.961307 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.961289276 podStartE2EDuration="3.961289276s" podCreationTimestamp="2026-01-29 15:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 15:59:15.958846215 +0000 UTC m=+1858.151889879" watchObservedRunningTime="2026-01-29 15:59:15.961289276 +0000 UTC m=+1858.154332930" Jan 29 15:59:18 crc kubenswrapper[4835]: I0129 15:59:18.500100 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 
29 15:59:18 crc kubenswrapper[4835]: E0129 15:59:18.500713 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:59:19 crc kubenswrapper[4835]: I0129 15:59:19.976709 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerStarted","Data":"829edd3eaac3cda072da89c586d4741a25321735c6e777e71fbd084cf47dbc31"} Jan 29 15:59:20 crc kubenswrapper[4835]: E0129 15:59:20.682125 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 15:59:20 crc kubenswrapper[4835]: E0129 15:59:20.682628 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4qs4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:20 crc kubenswrapper[4835]: E0129 15:59:20.683858 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" Jan 29 15:59:20 crc kubenswrapper[4835]: I0129 15:59:20.987660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerStarted","Data":"46336d9efd9408b58cac7352d53bd1d6ea40255bd86b5b6949cd3845c7dd711f"} Jan 29 15:59:20 crc kubenswrapper[4835]: E0129 15:59:20.989580 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.444085 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.445101 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.479882 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.485808 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.998276 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: I0129 15:59:21.998309 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:21 crc kubenswrapper[4835]: E0129 15:59:21.999349 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" Jan 29 15:59:22 crc kubenswrapper[4835]: I0129 15:59:22.598316 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Jan 29 15:59:22 crc kubenswrapper[4835]: I0129 15:59:22.598369 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 15:59:22 crc kubenswrapper[4835]: I0129 15:59:22.627573 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:59:22 crc kubenswrapper[4835]: I0129 15:59:22.640354 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.006707 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.006765 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.522816 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.523504 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-central-agent" containerID="cri-o://b7c3a6f248cc10844b4a82f340816ecc29e9dbbc1ba3d95f6ca0198bc2060772" gracePeriod=30 Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.523880 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="sg-core" containerID="cri-o://46336d9efd9408b58cac7352d53bd1d6ea40255bd86b5b6949cd3845c7dd711f" gracePeriod=30 Jan 29 15:59:23 crc kubenswrapper[4835]: I0129 15:59:23.523937 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-notification-agent" containerID="cri-o://829edd3eaac3cda072da89c586d4741a25321735c6e777e71fbd084cf47dbc31" gracePeriod=30 Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.032093 4835 generic.go:334] "Generic (PLEG): container finished" podID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerID="46336d9efd9408b58cac7352d53bd1d6ea40255bd86b5b6949cd3845c7dd711f" exitCode=2 Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.032193 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerDied","Data":"46336d9efd9408b58cac7352d53bd1d6ea40255bd86b5b6949cd3845c7dd711f"} Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.032232 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.032317 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.048792 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:24 crc kubenswrapper[4835]: I0129 15:59:24.140665 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047395 4835 generic.go:334] "Generic (PLEG): container finished" podID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerID="829edd3eaac3cda072da89c586d4741a25321735c6e777e71fbd084cf47dbc31" exitCode=0 Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047717 4835 generic.go:334] "Generic (PLEG): container finished" podID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerID="b7c3a6f248cc10844b4a82f340816ecc29e9dbbc1ba3d95f6ca0198bc2060772" exitCode=0 Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047477 4835 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerDied","Data":"829edd3eaac3cda072da89c586d4741a25321735c6e777e71fbd084cf47dbc31"} Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047778 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerDied","Data":"b7c3a6f248cc10844b4a82f340816ecc29e9dbbc1ba3d95f6ca0198bc2060772"} Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6","Type":"ContainerDied","Data":"741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde"} Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.047812 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741e7c2a9a3b878ced2fabe1e4575c8d96f5c7ed4a94e774616d4dd725251dde" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.115687 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.115793 4835 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.117131 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.132847 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.209678 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.209745 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.209823 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.209868 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.209979 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4qs4\" (UniqueName: \"kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210023 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210057 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd\") pod \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\" (UID: \"ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6\") " Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210202 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210479 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210690 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.210714 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.225689 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4" (OuterVolumeSpecName: "kube-api-access-z4qs4") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "kube-api-access-z4qs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.226635 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts" (OuterVolumeSpecName: "scripts") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.291809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.321047 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4qs4\" (UniqueName: \"kubernetes.io/projected/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-kube-api-access-z4qs4\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.321081 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.321092 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.328855 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.343660 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data" (OuterVolumeSpecName: "config-data") pod "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" (UID: "ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.423231 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:25 crc kubenswrapper[4835]: I0129 15:59:25.423270 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.055485 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.126655 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.141888 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.156513 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.157307 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="sg-core" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="sg-core" Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.157367 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-notification-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157383 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" 
containerName="ceilometer-notification-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.157413 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-central-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157423 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-central-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157732 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-central-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157766 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="sg-core" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.157786 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" containerName="ceilometer-notification-agent" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.160651 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.163126 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.165976 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.167708 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.237717 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.237805 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.237843 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.237986 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmpt5\" (UniqueName: \"kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " 
pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.238053 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.238095 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.238175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.342173 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.342339 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.342399 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.342437 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.343007 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmpt5\" (UniqueName: \"kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.343077 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.343125 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.346819 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 
crc kubenswrapper[4835]: I0129 15:59:26.346846 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.349366 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.350086 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.355262 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.374907 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmpt5\" (UniqueName: \"kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5\") pod \"ceilometer-0\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.375049 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.497662 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.512667 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6" path="/var/lib/kubelet/pods/ebb9e08f-194a-4bb3-b9b7-c0ee43f563c6/volumes" Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.621897 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.622251 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws6hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jrwpz_openshift-marketplace(9068c986-c447-405a-8620-069aa4d48469): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:26 crc kubenswrapper[4835]: E0129 15:59:26.624366 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 15:59:26 crc kubenswrapper[4835]: I0129 15:59:26.975874 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:26 crc kubenswrapper[4835]: W0129 15:59:26.977007 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cacfa1c_67b3_4cfe_8c0e_6f094603fe70.slice/crio-a5df6ec5ff1638bb460c5c41631decb2221205f00dd1956518f81c0d32dcc56d WatchSource:0}: Error finding container a5df6ec5ff1638bb460c5c41631decb2221205f00dd1956518f81c0d32dcc56d: Status 404 returned error can't find the container with id a5df6ec5ff1638bb460c5c41631decb2221205f00dd1956518f81c0d32dcc56d Jan 29 15:59:27 crc kubenswrapper[4835]: I0129 15:59:27.068695 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerStarted","Data":"a5df6ec5ff1638bb460c5c41631decb2221205f00dd1956518f81c0d32dcc56d"} Jan 29 15:59:31 crc kubenswrapper[4835]: I0129 15:59:31.491742 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:59:31 crc kubenswrapper[4835]: E0129 15:59:31.493438 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:59:39 crc kubenswrapper[4835]: I0129 15:59:39.175549 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerStarted","Data":"84e34b39ef02837084cb5dc0cb69f798df1bb721b9f55075519df1c1994fe68e"} Jan 29 15:59:40 crc kubenswrapper[4835]: I0129 15:59:40.188102 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" event={"ID":"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858","Type":"ContainerStarted","Data":"3b5ac41ff2999bbe5b233b240b25d7210474d04899fa404444643a14fe3a63b8"} Jan 29 15:59:40 crc kubenswrapper[4835]: I0129 15:59:40.218613 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" podStartSLOduration=3.71453709 podStartE2EDuration="50.218596024s" podCreationTimestamp="2026-01-29 15:58:50 +0000 UTC" firstStartedPulling="2026-01-29 15:58:51.508248777 +0000 UTC m=+1833.701292431" lastFinishedPulling="2026-01-29 15:59:38.012307691 +0000 UTC m=+1880.205351365" observedRunningTime="2026-01-29 15:59:40.209885854 +0000 UTC m=+1882.402929508" watchObservedRunningTime="2026-01-29 15:59:40.218596024 +0000 UTC m=+1882.411639678" Jan 29 15:59:40 crc kubenswrapper[4835]: E0129 15:59:40.497699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 15:59:43 crc kubenswrapper[4835]: I0129 15:59:43.230985 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerStarted","Data":"ab1c36e56d2c5ffa90d3bd3452745aae3f6b79441c922d4ad7b1ff4a97761c9f"} Jan 29 15:59:43 crc kubenswrapper[4835]: I0129 15:59:43.466113 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:44 crc kubenswrapper[4835]: E0129 
15:59:44.079626 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 15:59:44 crc kubenswrapper[4835]: E0129 15:59:44.079798 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmpt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:Probe
Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6cacfa1c-67b3-4cfe-8c0e-6f094603fe70): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:44 crc kubenswrapper[4835]: E0129 15:59:44.081107 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" Jan 29 15:59:44 crc 
kubenswrapper[4835]: I0129 15:59:44.248988 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerStarted","Data":"79fbcaa7b71f7231409a7911cf374660e7ade9df95153c70ea19a374fad88efa"} Jan 29 15:59:44 crc kubenswrapper[4835]: I0129 15:59:44.249319 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-central-agent" containerID="cri-o://84e34b39ef02837084cb5dc0cb69f798df1bb721b9f55075519df1c1994fe68e" gracePeriod=30 Jan 29 15:59:44 crc kubenswrapper[4835]: I0129 15:59:44.249492 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-notification-agent" containerID="cri-o://ab1c36e56d2c5ffa90d3bd3452745aae3f6b79441c922d4ad7b1ff4a97761c9f" gracePeriod=30 Jan 29 15:59:44 crc kubenswrapper[4835]: I0129 15:59:44.249498 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="sg-core" containerID="cri-o://79fbcaa7b71f7231409a7911cf374660e7ade9df95153c70ea19a374fad88efa" gracePeriod=30 Jan 29 15:59:45 crc kubenswrapper[4835]: I0129 15:59:45.259245 4835 generic.go:334] "Generic (PLEG): container finished" podID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerID="79fbcaa7b71f7231409a7911cf374660e7ade9df95153c70ea19a374fad88efa" exitCode=2 Jan 29 15:59:45 crc kubenswrapper[4835]: I0129 15:59:45.260341 4835 generic.go:334] "Generic (PLEG): container finished" podID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerID="ab1c36e56d2c5ffa90d3bd3452745aae3f6b79441c922d4ad7b1ff4a97761c9f" exitCode=0 Jan 29 15:59:45 crc kubenswrapper[4835]: I0129 15:59:45.259318 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerDied","Data":"79fbcaa7b71f7231409a7911cf374660e7ade9df95153c70ea19a374fad88efa"} Jan 29 15:59:45 crc kubenswrapper[4835]: I0129 15:59:45.260576 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerDied","Data":"ab1c36e56d2c5ffa90d3bd3452745aae3f6b79441c922d4ad7b1ff4a97761c9f"} Jan 29 15:59:46 crc kubenswrapper[4835]: I0129 15:59:46.492026 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 15:59:46 crc kubenswrapper[4835]: E0129 15:59:46.492718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.295153 4835 generic.go:334] "Generic (PLEG): container finished" podID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerID="84e34b39ef02837084cb5dc0cb69f798df1bb721b9f55075519df1c1994fe68e" exitCode=0 Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.295291 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerDied","Data":"84e34b39ef02837084cb5dc0cb69f798df1bb721b9f55075519df1c1994fe68e"} Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.820681 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.921724 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.921798 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.921821 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.921933 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.922038 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmpt5\" (UniqueName: \"kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.922087 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.922111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data\") pod \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\" (UID: \"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70\") " Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.922280 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.922868 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.923448 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.923510 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.931703 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5" (OuterVolumeSpecName: "kube-api-access-dmpt5") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "kube-api-access-dmpt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.935306 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts" (OuterVolumeSpecName: "scripts") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.952597 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.980319 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data" (OuterVolumeSpecName: "config-data") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:49 crc kubenswrapper[4835]: I0129 15:59:49.987695 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" (UID: "6cacfa1c-67b3-4cfe-8c0e-6f094603fe70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.029449 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmpt5\" (UniqueName: \"kubernetes.io/projected/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-kube-api-access-dmpt5\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.030088 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.030106 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.030120 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.030134 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.306755 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6cacfa1c-67b3-4cfe-8c0e-6f094603fe70","Type":"ContainerDied","Data":"a5df6ec5ff1638bb460c5c41631decb2221205f00dd1956518f81c0d32dcc56d"} Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.306800 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.306817 4835 scope.go:117] "RemoveContainer" containerID="79fbcaa7b71f7231409a7911cf374660e7ade9df95153c70ea19a374fad88efa" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.333677 4835 scope.go:117] "RemoveContainer" containerID="ab1c36e56d2c5ffa90d3bd3452745aae3f6b79441c922d4ad7b1ff4a97761c9f" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.388378 4835 scope.go:117] "RemoveContainer" containerID="84e34b39ef02837084cb5dc0cb69f798df1bb721b9f55075519df1c1994fe68e" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.411576 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.423637 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.434529 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:50 crc kubenswrapper[4835]: E0129 15:59:50.435029 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-notification-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435051 
4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-notification-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: E0129 15:59:50.435081 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="sg-core" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435091 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="sg-core" Jan 29 15:59:50 crc kubenswrapper[4835]: E0129 15:59:50.435133 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-central-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435142 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-central-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435353 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-central-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435387 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="ceilometer-notification-agent" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.435400 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" containerName="sg-core" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.437332 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.439392 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.442890 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.448660 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.503798 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cacfa1c-67b3-4cfe-8c0e-6f094603fe70" path="/var/lib/kubelet/pods/6cacfa1c-67b3-4cfe-8c0e-6f094603fe70/volumes" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.546176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.546433 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.546874 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.546938 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.547002 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2486\" (UniqueName: \"kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.547079 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.547097 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.650282 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.650799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.651018 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2486\" (UniqueName: \"kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.651150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.651180 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.651343 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.651620 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: 
I0129 15:59:50.651666 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.652206 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.654990 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.655392 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.656102 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.664831 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " 
pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.668440 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2486\" (UniqueName: \"kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486\") pod \"ceilometer-0\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " pod="openstack/ceilometer-0" Jan 29 15:59:50 crc kubenswrapper[4835]: I0129 15:59:50.756520 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 15:59:51 crc kubenswrapper[4835]: W0129 15:59:51.222361 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c38f85e_331d_4c4e_bff4_d22762fd36ff.slice/crio-26fe3f528bc98ec7feb37627a390e460baa362a6e68551e1c401f07b96ce9e25 WatchSource:0}: Error finding container 26fe3f528bc98ec7feb37627a390e460baa362a6e68551e1c401f07b96ce9e25: Status 404 returned error can't find the container with id 26fe3f528bc98ec7feb37627a390e460baa362a6e68551e1c401f07b96ce9e25 Jan 29 15:59:51 crc kubenswrapper[4835]: I0129 15:59:51.226588 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:51 crc kubenswrapper[4835]: I0129 15:59:51.318207 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerStarted","Data":"26fe3f528bc98ec7feb37627a390e460baa362a6e68551e1c401f07b96ce9e25"} Jan 29 15:59:54 crc kubenswrapper[4835]: I0129 15:59:54.341466 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerStarted","Data":"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4"} Jan 29 15:59:54 crc kubenswrapper[4835]: E0129 15:59:54.623093 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc 
= initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 15:59:54 crc kubenswrapper[4835]: E0129 15:59:54.623241 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ws6hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-jrwpz_openshift-marketplace(9068c986-c447-405a-8620-069aa4d48469): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:54 crc kubenswrapper[4835]: E0129 15:59:54.624413 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 15:59:56 crc kubenswrapper[4835]: I0129 15:59:56.366040 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerStarted","Data":"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b"} Jan 29 15:59:57 crc kubenswrapper[4835]: I0129 15:59:57.374981 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerStarted","Data":"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc"} Jan 29 15:59:57 crc kubenswrapper[4835]: E0129 15:59:57.559053 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 15:59:57 crc kubenswrapper[4835]: E0129 15:59:57.559286 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2486,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(5c38f85e-331d-4c4e-bff4-d22762fd36ff): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 15:59:57 crc kubenswrapper[4835]: E0129 15:59:57.561305 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" Jan 29 15:59:58 crc kubenswrapper[4835]: E0129 15:59:58.386361 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" 
podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" Jan 29 15:59:59 crc kubenswrapper[4835]: I0129 15:59:59.572227 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 15:59:59 crc kubenswrapper[4835]: I0129 15:59:59.572750 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-central-agent" containerID="cri-o://09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4" gracePeriod=30 Jan 29 15:59:59 crc kubenswrapper[4835]: I0129 15:59:59.572889 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-notification-agent" containerID="cri-o://75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b" gracePeriod=30 Jan 29 15:59:59 crc kubenswrapper[4835]: I0129 15:59:59.572914 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="sg-core" containerID="cri-o://12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc" gracePeriod=30 Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.145626 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9"] Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.146846 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.149212 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.150880 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.158886 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9"] Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.243676 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.243765 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.243940 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz72\" (UniqueName: \"kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.346451 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.346575 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.346628 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz72\" (UniqueName: \"kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.348318 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.352822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.365962 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz72\" (UniqueName: \"kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72\") pod \"collect-profiles-29495040-5d8t9\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.404942 4835 generic.go:334] "Generic (PLEG): container finished" podID="be114b44-2cf1-4af2-93ae-05b8e56c83c9" containerID="a4a8d1d6795a57017295b4d7231ffac10f0d8b6e3e695493171f62c381faf40f" exitCode=0 Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.405021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6wzhv" event={"ID":"be114b44-2cf1-4af2-93ae-05b8e56c83c9","Type":"ContainerDied","Data":"a4a8d1d6795a57017295b4d7231ffac10f0d8b6e3e695493171f62c381faf40f"} Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.409745 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerID="12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc" exitCode=2 Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.409778 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerID="75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b" exitCode=0 Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.409775 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerDied","Data":"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc"} Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.409816 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerDied","Data":"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b"} Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.468802 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.492035 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 16:00:00 crc kubenswrapper[4835]: W0129 16:00:00.944264 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65f7af16_6a0b_43c9_b9bd_c248a3b59459.slice/crio-0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a WatchSource:0}: Error finding container 0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a: Status 404 returned error can't find the container with id 0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a Jan 29 16:00:00 crc kubenswrapper[4835]: I0129 16:00:00.947691 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9"] Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.421634 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" event={"ID":"65f7af16-6a0b-43c9-b9bd-c248a3b59459","Type":"ContainerStarted","Data":"e2cd34f33f4e2098c24cf42cdca9ec12bbe68f37c063301fe894a458f58b18ca"} Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.422028 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" event={"ID":"65f7af16-6a0b-43c9-b9bd-c248a3b59459","Type":"ContainerStarted","Data":"0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a"} Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.427971 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60"} Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.439877 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" podStartSLOduration=1.439860378 podStartE2EDuration="1.439860378s" podCreationTimestamp="2026-01-29 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:01.438569056 +0000 UTC m=+1903.631612710" watchObservedRunningTime="2026-01-29 16:00:01.439860378 +0000 UTC m=+1903.632904032" Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.840960 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-6wzhv" Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.983625 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle\") pod \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.983716 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wcw\" (UniqueName: \"kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw\") pod \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.983849 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config\") pod \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\" (UID: \"be114b44-2cf1-4af2-93ae-05b8e56c83c9\") " Jan 29 16:00:01 crc kubenswrapper[4835]: I0129 16:00:01.990155 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw" (OuterVolumeSpecName: "kube-api-access-f9wcw") pod "be114b44-2cf1-4af2-93ae-05b8e56c83c9" (UID: "be114b44-2cf1-4af2-93ae-05b8e56c83c9"). InnerVolumeSpecName "kube-api-access-f9wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.014982 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be114b44-2cf1-4af2-93ae-05b8e56c83c9" (UID: "be114b44-2cf1-4af2-93ae-05b8e56c83c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.019701 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config" (OuterVolumeSpecName: "config") pod "be114b44-2cf1-4af2-93ae-05b8e56c83c9" (UID: "be114b44-2cf1-4af2-93ae-05b8e56c83c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.086432 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wcw\" (UniqueName: \"kubernetes.io/projected/be114b44-2cf1-4af2-93ae-05b8e56c83c9-kube-api-access-f9wcw\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.086470 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.086564 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be114b44-2cf1-4af2-93ae-05b8e56c83c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.438626 4835 generic.go:334] "Generic (PLEG): container finished" podID="65f7af16-6a0b-43c9-b9bd-c248a3b59459" containerID="e2cd34f33f4e2098c24cf42cdca9ec12bbe68f37c063301fe894a458f58b18ca" exitCode=0 Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.438796 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" event={"ID":"65f7af16-6a0b-43c9-b9bd-c248a3b59459","Type":"ContainerDied","Data":"e2cd34f33f4e2098c24cf42cdca9ec12bbe68f37c063301fe894a458f58b18ca"} Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.442138 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-sync-6wzhv" event={"ID":"be114b44-2cf1-4af2-93ae-05b8e56c83c9","Type":"ContainerDied","Data":"9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf"} Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.442196 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b3f9c08fe5b6e9d850e9c2a0afdeabe2a3f13446d6141cb2c9ae7b640ac7fbf" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.442214 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6wzhv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.586186 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:00:02 crc kubenswrapper[4835]: E0129 16:00:02.586821 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be114b44-2cf1-4af2-93ae-05b8e56c83c9" containerName="neutron-db-sync" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.586841 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="be114b44-2cf1-4af2-93ae-05b8e56c83c9" containerName="neutron-db-sync" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.587269 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="be114b44-2cf1-4af2-93ae-05b8e56c83c9" containerName="neutron-db-sync" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.593010 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.603200 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.702919 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.702980 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.703286 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.703350 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.703409 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.703447 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtvp\" (UniqueName: \"kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.765802 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.776171 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.783607 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.785743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-8nwk9" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.786063 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.790954 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807093 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: 
\"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807526 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807644 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtvp\" (UniqueName: \"kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807724 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.807765 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " 
pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.808639 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.809017 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.809208 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.809841 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.814211 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.815872 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: 
\"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.866028 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtvp\" (UniqueName: \"kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp\") pod \"dnsmasq-dns-6b4f5fc4f-p8jsz\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.909374 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw78d\" (UniqueName: \"kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.909551 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.909654 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.909883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: 
\"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.909913 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:02 crc kubenswrapper[4835]: I0129 16:00:02.921924 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.013556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.013599 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.013638 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw78d\" (UniqueName: \"kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.013723 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.013751 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.021112 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.022459 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.031670 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.036319 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " 
pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.037037 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw78d\" (UniqueName: \"kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d\") pod \"neutron-b9cd9cfb8-b9rfv\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.109019 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.506271 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:00:03 crc kubenswrapper[4835]: I0129 16:00:03.838555 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:03 crc kubenswrapper[4835]: W0129 16:00:03.845227 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0965293_b9ac_4787_9e49_bea41b374da2.slice/crio-f74f39d6f24e8d10e257c798d27e6721067abba5577b35abc2e93f3912e49a9a WatchSource:0}: Error finding container f74f39d6f24e8d10e257c798d27e6721067abba5577b35abc2e93f3912e49a9a: Status 404 returned error can't find the container with id f74f39d6f24e8d10e257c798d27e6721067abba5577b35abc2e93f3912e49a9a Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.136539 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.164115 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243563 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243678 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2486\" (UniqueName: \"kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243752 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243796 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmz72\" (UniqueName: \"kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72\") pod \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243834 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243872 4835 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243901 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume\") pod \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.243963 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.244030 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume\") pod \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\" (UID: \"65f7af16-6a0b-43c9-b9bd-c248a3b59459\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.244060 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd\") pod \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\" (UID: \"5c38f85e-331d-4c4e-bff4-d22762fd36ff\") " Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.244820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.245018 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.245504 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume" (OuterVolumeSpecName: "config-volume") pod "65f7af16-6a0b-43c9-b9bd-c248a3b59459" (UID: "65f7af16-6a0b-43c9-b9bd-c248a3b59459"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.249626 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts" (OuterVolumeSpecName: "scripts") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.249625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72" (OuterVolumeSpecName: "kube-api-access-xmz72") pod "65f7af16-6a0b-43c9-b9bd-c248a3b59459" (UID: "65f7af16-6a0b-43c9-b9bd-c248a3b59459"). InnerVolumeSpecName "kube-api-access-xmz72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.250625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486" (OuterVolumeSpecName: "kube-api-access-z2486") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "kube-api-access-z2486". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.258105 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65f7af16-6a0b-43c9-b9bd-c248a3b59459" (UID: "65f7af16-6a0b-43c9-b9bd-c248a3b59459"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.275764 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.295208 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data" (OuterVolumeSpecName: "config-data") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.295729 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c38f85e-331d-4c4e-bff4-d22762fd36ff" (UID: "5c38f85e-331d-4c4e-bff4-d22762fd36ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345839 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345870 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmz72\" (UniqueName: \"kubernetes.io/projected/65f7af16-6a0b-43c9-b9bd-c248a3b59459-kube-api-access-xmz72\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345880 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345888 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345897 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65f7af16-6a0b-43c9-b9bd-c248a3b59459-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345905 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345915 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65f7af16-6a0b-43c9-b9bd-c248a3b59459-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345922 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c38f85e-331d-4c4e-bff4-d22762fd36ff-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345930 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c38f85e-331d-4c4e-bff4-d22762fd36ff-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.345938 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2486\" (UniqueName: \"kubernetes.io/projected/5c38f85e-331d-4c4e-bff4-d22762fd36ff-kube-api-access-z2486\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.485662 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerStarted","Data":"f74f39d6f24e8d10e257c798d27e6721067abba5577b35abc2e93f3912e49a9a"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.488795 4835 generic.go:334] "Generic (PLEG): container finished" podID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerID="16c6865f3a5dfa7e557676d74d372f2b574d3c8667259bc047a23c592eb39f97" exitCode=0 Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.488835 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" 
event={"ID":"4d6ac59c-3290-44c5-a7fd-7f878afe393b","Type":"ContainerDied","Data":"16c6865f3a5dfa7e557676d74d372f2b574d3c8667259bc047a23c592eb39f97"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.489316 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" event={"ID":"4d6ac59c-3290-44c5-a7fd-7f878afe393b","Type":"ContainerStarted","Data":"4951f518eb6c138c3cdd63c743f9ecbc0b4313e322eb16a5b5c75e76a78c01d2"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.495465 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerID="09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4" exitCode=0 Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.495705 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.499366 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.505827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerDied","Data":"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.505882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c38f85e-331d-4c4e-bff4-d22762fd36ff","Type":"ContainerDied","Data":"26fe3f528bc98ec7feb37627a390e460baa362a6e68551e1c401f07b96ce9e25"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.505896 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495040-5d8t9" event={"ID":"65f7af16-6a0b-43c9-b9bd-c248a3b59459","Type":"ContainerDied","Data":"0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a"} Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.505910 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fdabf45e6e0f66addc65e462faf366bda2811cc3bdef52cfd96a53e228dc37a" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.505932 4835 scope.go:117] "RemoveContainer" containerID="12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.541072 4835 scope.go:117] "RemoveContainer" containerID="75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.578895 4835 scope.go:117] "RemoveContainer" containerID="09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.592764 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.604109 
4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.624358 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.625003 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-central-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625027 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-central-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.625050 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="sg-core" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625062 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="sg-core" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.625095 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f7af16-6a0b-43c9-b9bd-c248a3b59459" containerName="collect-profiles" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625102 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f7af16-6a0b-43c9-b9bd-c248a3b59459" containerName="collect-profiles" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.625121 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-notification-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625129 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-notification-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625407 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-notification-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625420 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f7af16-6a0b-43c9-b9bd-c248a3b59459" containerName="collect-profiles" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625430 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="ceilometer-central-agent" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.625460 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" containerName="sg-core" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.646537 4835 scope.go:117] "RemoveContainer" containerID="12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.647646 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc\": container with ID starting with 12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc not found: ID does not exist" containerID="12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.647709 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc"} err="failed to get container status \"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc\": rpc error: code = NotFound desc = could not find container \"12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc\": container with ID starting with 12c0319e88ef500739d41b66f17ba0b9158d206f017fa47f85faaa31c9acdcfc not found: ID does not exist" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 
16:00:04.647739 4835 scope.go:117] "RemoveContainer" containerID="75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.654750 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b\": container with ID starting with 75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b not found: ID does not exist" containerID="75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.654797 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b"} err="failed to get container status \"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b\": rpc error: code = NotFound desc = could not find container \"75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b\": container with ID starting with 75d3960d2294ca9af27c732073df223726db54a21ea9169f361cf54ce653d11b not found: ID does not exist" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.654832 4835 scope.go:117] "RemoveContainer" containerID="09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4" Jan 29 16:00:04 crc kubenswrapper[4835]: E0129 16:00:04.655797 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4\": container with ID starting with 09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4 not found: ID does not exist" containerID="09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.656191 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4"} err="failed to get container status \"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4\": rpc error: code = NotFound desc = could not find container \"09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4\": container with ID starting with 09a82a82a1884371c5d9b2281ba206964b2cc43b2e3643f4af8f245c40c7efc4 not found: ID does not exist" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.656993 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.664718 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.665181 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.676404 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.754893 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755048 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755124 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755147 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2g8\" (UniqueName: \"kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755175 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755213 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.755240 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857291 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts\") pod \"ceilometer-0\" (UID: 
\"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857411 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857500 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857526 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2g8\" (UniqueName: \"kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857560 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857600 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.857629 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.858690 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.859247 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.866342 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.868184 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.868968 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.870614 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.879087 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2g8\" (UniqueName: \"kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8\") pod \"ceilometer-0\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " pod="openstack/ceilometer-0" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.921796 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.924125 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.931262 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.931568 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.947091 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:00:04 crc kubenswrapper[4835]: I0129 16:00:04.988875 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.062076 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.062351 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.062653 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.062754 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.063020 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmwg\" (UniqueName: \"kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg\") pod \"neutron-646f7c7f9f-p9x92\" (UID: 
\"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.063124 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.063386 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165686 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165762 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165834 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmwg\" (UniqueName: \"kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " 
pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165866 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165906 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165941 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.165988 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.173111 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 
16:00:05.174227 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.175025 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.176572 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.183793 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.191540 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.191891 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmwg\" (UniqueName: 
\"kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg\") pod \"neutron-646f7c7f9f-p9x92\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.322315 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:05 crc kubenswrapper[4835]: E0129 16:00:05.496448 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.529882 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerStarted","Data":"a5096d4d38021dc3d567a1dcf38f502d867b8b45f2ace9dae5fac61c20c2a5fb"} Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.529918 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerStarted","Data":"6e431c36ad6d0a848eadfc19d8639c4e34310d542e5006bc3b44806ddcd50497"} Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.530578 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.532648 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" event={"ID":"4d6ac59c-3290-44c5-a7fd-7f878afe393b","Type":"ContainerStarted","Data":"a66043698a03e5f4a0e41f575e68579bd9553912fae8d7306dee73d253a53b47"} Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.533220 4835 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.557188 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.574000 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b9cd9cfb8-b9rfv" podStartSLOduration=3.573982002 podStartE2EDuration="3.573982002s" podCreationTimestamp="2026-01-29 16:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:05.555721791 +0000 UTC m=+1907.748765435" watchObservedRunningTime="2026-01-29 16:00:05.573982002 +0000 UTC m=+1907.767025656" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.586695 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" podStartSLOduration=3.586672352 podStartE2EDuration="3.586672352s" podCreationTimestamp="2026-01-29 16:00:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:05.582659911 +0000 UTC m=+1907.775703565" watchObservedRunningTime="2026-01-29 16:00:05.586672352 +0000 UTC m=+1907.779716006" Jan 29 16:00:05 crc kubenswrapper[4835]: I0129 16:00:05.932629 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.505443 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c38f85e-331d-4c4e-bff4-d22762fd36ff" path="/var/lib/kubelet/pods/5c38f85e-331d-4c4e-bff4-d22762fd36ff/volumes" Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.546064 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerStarted","Data":"5516bdfaa23864efb5fc44de188fefbcdf01970d82a25572aeee9e084ba90091"} Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.546116 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerStarted","Data":"37e3716ae8795fa9014b203855b7a68527c409d2288ecb0c91b08d5ef9323f06"} Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.552274 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerStarted","Data":"af2ec1e150789b7833e77af35517e4ab23bbeb30ea3652132914ac5c56504042"} Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.552336 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.552351 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerStarted","Data":"d1037751e301323740b1f50d8b6f17e6d9cc7df3812be6e85cd54495a28354ad"} Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.552365 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerStarted","Data":"588fca982430b9bab96c94e315015a3a0aa40da494abb82aa015b0f738527a0f"} Jan 29 16:00:06 crc kubenswrapper[4835]: I0129 16:00:06.576359 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-646f7c7f9f-p9x92" podStartSLOduration=2.576341717 podStartE2EDuration="2.576341717s" podCreationTimestamp="2026-01-29 16:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
16:00:06.574744547 +0000 UTC m=+1908.767788221" watchObservedRunningTime="2026-01-29 16:00:06.576341717 +0000 UTC m=+1908.769385371" Jan 29 16:00:07 crc kubenswrapper[4835]: I0129 16:00:07.562278 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerStarted","Data":"bf81ae63675939faa370d607089084f73cca18b4a9bd611384452ddaa73af5fb"} Jan 29 16:00:08 crc kubenswrapper[4835]: I0129 16:00:08.573034 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerStarted","Data":"0cd19f03d1c67cf55ccb250efd5461dd82ea636afb2365f1c88401f00d2d236a"} Jan 29 16:00:09 crc kubenswrapper[4835]: E0129 16:00:09.216073 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:00:09 crc kubenswrapper[4835]: E0129 16:00:09.216259 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cl2g8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b96385bd-ab7f-4e1c-ba57-b1a5e0d49736): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:00:09 crc kubenswrapper[4835]: E0129 16:00:09.217420 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" Jan 29 16:00:09 crc kubenswrapper[4835]: E0129 16:00:09.584413 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" 
podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" Jan 29 16:00:12 crc kubenswrapper[4835]: I0129 16:00:12.924484 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.005850 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.006085 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="dnsmasq-dns" containerID="cri-o://c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24" gracePeriod=10 Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.577101 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.634709 4835 generic.go:334] "Generic (PLEG): container finished" podID="48c1d56e-700b-4165-9cf2-473a90148c08" containerID="c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24" exitCode=0 Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.634753 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" event={"ID":"48c1d56e-700b-4165-9cf2-473a90148c08","Type":"ContainerDied","Data":"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24"} Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.634777 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" event={"ID":"48c1d56e-700b-4165-9cf2-473a90148c08","Type":"ContainerDied","Data":"62ed10d94853989b4064e73a66f41e9e81ba84cd09d2a74932778e95aa911a8a"} Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.634799 4835 scope.go:117] "RemoveContainer" containerID="c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24" 
Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.634882 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c6fbb54f-qgfqq" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.664256 4835 scope.go:117] "RemoveContainer" containerID="58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.681869 4835 scope.go:117] "RemoveContainer" containerID="c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24" Jan 29 16:00:13 crc kubenswrapper[4835]: E0129 16:00:13.682430 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24\": container with ID starting with c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24 not found: ID does not exist" containerID="c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.682504 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24"} err="failed to get container status \"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24\": rpc error: code = NotFound desc = could not find container \"c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24\": container with ID starting with c04a20c20817bd8d6d49db95dfb5e5eee69321acde89b880a770d3be34c88e24 not found: ID does not exist" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.682542 4835 scope.go:117] "RemoveContainer" containerID="58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611" Jan 29 16:00:13 crc kubenswrapper[4835]: E0129 16:00:13.683200 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611\": container with ID starting with 58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611 not found: ID does not exist" containerID="58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.683238 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611"} err="failed to get container status \"58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611\": rpc error: code = NotFound desc = could not find container \"58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611\": container with ID starting with 58b7356e13c72046eb6d64b25166aba7b412fc8babeae97cb23b9fa5c9ed2611 not found: ID does not exist" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.734341 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.734396 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.734567 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 
16:00:13.734624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljtxd\" (UniqueName: \"kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.734672 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.734706 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config\") pod \"48c1d56e-700b-4165-9cf2-473a90148c08\" (UID: \"48c1d56e-700b-4165-9cf2-473a90148c08\") " Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.756717 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd" (OuterVolumeSpecName: "kube-api-access-ljtxd") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "kube-api-access-ljtxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.807402 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.811787 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.821294 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.822760 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.827692 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config" (OuterVolumeSpecName: "config") pod "48c1d56e-700b-4165-9cf2-473a90148c08" (UID: "48c1d56e-700b-4165-9cf2-473a90148c08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838227 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838263 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838276 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838287 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljtxd\" (UniqueName: \"kubernetes.io/projected/48c1d56e-700b-4165-9cf2-473a90148c08-kube-api-access-ljtxd\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838301 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.838313 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c1d56e-700b-4165-9cf2-473a90148c08-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.977080 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 16:00:13 crc kubenswrapper[4835]: I0129 16:00:13.985291 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c6fbb54f-qgfqq"] Jan 29 
16:00:14 crc kubenswrapper[4835]: I0129 16:00:14.503319 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" path="/var/lib/kubelet/pods/48c1d56e-700b-4165-9cf2-473a90148c08/volumes" Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.063216 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.064227 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-central-agent" containerID="cri-o://5516bdfaa23864efb5fc44de188fefbcdf01970d82a25572aeee9e084ba90091" gracePeriod=30 Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.064999 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="sg-core" containerID="cri-o://0cd19f03d1c67cf55ccb250efd5461dd82ea636afb2365f1c88401f00d2d236a" gracePeriod=30 Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.065066 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-notification-agent" containerID="cri-o://bf81ae63675939faa370d607089084f73cca18b4a9bd611384452ddaa73af5fb" gracePeriod=30 Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.693020 4835 generic.go:334] "Generic (PLEG): container finished" podID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerID="0cd19f03d1c67cf55ccb250efd5461dd82ea636afb2365f1c88401f00d2d236a" exitCode=2 Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.693355 4835 generic.go:334] "Generic (PLEG): container finished" podID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerID="5516bdfaa23864efb5fc44de188fefbcdf01970d82a25572aeee9e084ba90091" exitCode=0 Jan 29 16:00:19 crc kubenswrapper[4835]: 
I0129 16:00:19.693151 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerDied","Data":"0cd19f03d1c67cf55ccb250efd5461dd82ea636afb2365f1c88401f00d2d236a"} Jan 29 16:00:19 crc kubenswrapper[4835]: I0129 16:00:19.693397 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerDied","Data":"5516bdfaa23864efb5fc44de188fefbcdf01970d82a25572aeee9e084ba90091"} Jan 29 16:00:20 crc kubenswrapper[4835]: E0129 16:00:20.494854 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 16:00:20 crc kubenswrapper[4835]: I0129 16:00:20.703148 4835 generic.go:334] "Generic (PLEG): container finished" podID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerID="bf81ae63675939faa370d607089084f73cca18b4a9bd611384452ddaa73af5fb" exitCode=0 Jan 29 16:00:20 crc kubenswrapper[4835]: I0129 16:00:20.703188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerDied","Data":"bf81ae63675939faa370d607089084f73cca18b4a9bd611384452ddaa73af5fb"} Jan 29 16:00:21 crc kubenswrapper[4835]: I0129 16:00:21.886958 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:21.999937 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000036 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000120 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000185 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000340 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000442 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2g8\" (UniqueName: \"kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8\") pod \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\" (UID: \"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736\") " Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.000905 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.001539 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.001559 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.008811 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8" (OuterVolumeSpecName: "kube-api-access-cl2g8") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "kube-api-access-cl2g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.009146 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts" (OuterVolumeSpecName: "scripts") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.049340 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data" (OuterVolumeSpecName: "config-data") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.049938 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.050663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" (UID: "b96385bd-ab7f-4e1c-ba57-b1a5e0d49736"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.104171 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2g8\" (UniqueName: \"kubernetes.io/projected/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-kube-api-access-cl2g8\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.104205 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.104220 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.104233 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.104244 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.727022 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b96385bd-ab7f-4e1c-ba57-b1a5e0d49736","Type":"ContainerDied","Data":"37e3716ae8795fa9014b203855b7a68527c409d2288ecb0c91b08d5ef9323f06"} Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.727092 4835 scope.go:117] "RemoveContainer" containerID="0cd19f03d1c67cf55ccb250efd5461dd82ea636afb2365f1c88401f00d2d236a" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.727288 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.748782 4835 scope.go:117] "RemoveContainer" containerID="bf81ae63675939faa370d607089084f73cca18b4a9bd611384452ddaa73af5fb" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.778689 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.789529 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.794198 4835 scope.go:117] "RemoveContainer" containerID="5516bdfaa23864efb5fc44de188fefbcdf01970d82a25572aeee9e084ba90091" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.800678 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:22 crc kubenswrapper[4835]: E0129 16:00:22.801154 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-notification-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 
16:00:22.801179 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-notification-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: E0129 16:00:22.801209 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-central-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801218 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-central-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: E0129 16:00:22.801246 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="dnsmasq-dns" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801253 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="dnsmasq-dns" Jan 29 16:00:22 crc kubenswrapper[4835]: E0129 16:00:22.801281 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="init" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801288 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="init" Jan 29 16:00:22 crc kubenswrapper[4835]: E0129 16:00:22.801298 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="sg-core" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801305 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="sg-core" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801522 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="sg-core" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801544 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="48c1d56e-700b-4165-9cf2-473a90148c08" containerName="dnsmasq-dns" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801581 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-central-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.801600 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" containerName="ceilometer-notification-agent" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.803767 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.805566 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.805893 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.818667 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.923231 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.923400 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: 
I0129 16:00:22.923449 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.923523 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.923562 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.923883 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:22 crc kubenswrapper[4835]: I0129 16:00:22.924091 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xxlm\" (UniqueName: \"kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.026199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.026882 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xxlm\" (UniqueName: \"kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.026990 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.027091 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.027175 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.027267 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 
16:00:23.028568 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.028156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.027934 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.031797 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.034114 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.040822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " 
pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.042296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.046106 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xxlm\" (UniqueName: \"kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm\") pod \"ceilometer-0\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.124507 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.561628 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:00:23 crc kubenswrapper[4835]: W0129 16:00:23.565190 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39fdefdc_1d93_4567_9d20_33f3c3c20051.slice/crio-41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006 WatchSource:0}: Error finding container 41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006: Status 404 returned error can't find the container with id 41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006 Jan 29 16:00:23 crc kubenswrapper[4835]: I0129 16:00:23.737716 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerStarted","Data":"41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006"} Jan 29 16:00:24 crc kubenswrapper[4835]: I0129 16:00:24.509700 4835 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b96385bd-ab7f-4e1c-ba57-b1a5e0d49736" path="/var/lib/kubelet/pods/b96385bd-ab7f-4e1c-ba57-b1a5e0d49736/volumes" Jan 29 16:00:25 crc kubenswrapper[4835]: I0129 16:00:25.760716 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerStarted","Data":"df1ae7ef1baa5c24dbe5f47dc713db2de333f1ad6a4144d55433a36d695a3716"} Jan 29 16:00:26 crc kubenswrapper[4835]: I0129 16:00:26.770868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerStarted","Data":"9a55065beb00a16ee66830598aa3988f30bca1e1c7404cd78f965176724c270d"} Jan 29 16:00:28 crc kubenswrapper[4835]: E0129 16:00:28.697031 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79" Jan 29 16:00:28 crc kubenswrapper[4835]: E0129 16:00:28.697938 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9xxlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(39fdefdc-1d93-4567-9d20-33f3c3c20051): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:00:28 crc kubenswrapper[4835]: E0129 16:00:28.699435 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" Jan 29 16:00:28 crc kubenswrapper[4835]: I0129 16:00:28.792270 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerStarted","Data":"815a87a6496ef6eef743cf2be54f4fb51b0176be5179f4ab3e8d95b561a5a999"} Jan 29 16:00:28 crc kubenswrapper[4835]: E0129 16:00:28.794112 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" Jan 29 16:00:29 crc kubenswrapper[4835]: E0129 16:00:29.802618 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24@sha256:47a0b3f12211320d1828524a324ab3ec9deac97c17b9d3f056c87d3384d9eb79\\\"\"" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" Jan 29 16:00:31 crc kubenswrapper[4835]: E0129 16:00:31.493955 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" Jan 29 16:00:33 crc kubenswrapper[4835]: I0129 16:00:33.117381 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.334547 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.402350 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.402596 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b9cd9cfb8-b9rfv" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-api" containerID="cri-o://6e431c36ad6d0a848eadfc19d8639c4e34310d542e5006bc3b44806ddcd50497" gracePeriod=30 Jan 29 
16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.402732 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-b9cd9cfb8-b9rfv" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-httpd" containerID="cri-o://a5096d4d38021dc3d567a1dcf38f502d867b8b45f2ace9dae5fac61c20c2a5fb" gracePeriod=30 Jan 29 16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.858145 4835 generic.go:334] "Generic (PLEG): container finished" podID="a0965293-b9ac-4787-9e49-bea41b374da2" containerID="a5096d4d38021dc3d567a1dcf38f502d867b8b45f2ace9dae5fac61c20c2a5fb" exitCode=0 Jan 29 16:00:35 crc kubenswrapper[4835]: I0129 16:00:35.858233 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerDied","Data":"a5096d4d38021dc3d567a1dcf38f502d867b8b45f2ace9dae5fac61c20c2a5fb"} Jan 29 16:00:38 crc kubenswrapper[4835]: I0129 16:00:38.892590 4835 generic.go:334] "Generic (PLEG): container finished" podID="a0965293-b9ac-4787-9e49-bea41b374da2" containerID="6e431c36ad6d0a848eadfc19d8639c4e34310d542e5006bc3b44806ddcd50497" exitCode=0 Jan 29 16:00:38 crc kubenswrapper[4835]: I0129 16:00:38.892701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerDied","Data":"6e431c36ad6d0a848eadfc19d8639c4e34310d542e5006bc3b44806ddcd50497"} Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.168234 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.368481 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config\") pod \"a0965293-b9ac-4787-9e49-bea41b374da2\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.368533 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle\") pod \"a0965293-b9ac-4787-9e49-bea41b374da2\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.368672 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw78d\" (UniqueName: \"kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d\") pod \"a0965293-b9ac-4787-9e49-bea41b374da2\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.369773 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config\") pod \"a0965293-b9ac-4787-9e49-bea41b374da2\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.369811 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs\") pod \"a0965293-b9ac-4787-9e49-bea41b374da2\" (UID: \"a0965293-b9ac-4787-9e49-bea41b374da2\") " Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.375709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a0965293-b9ac-4787-9e49-bea41b374da2" (UID: "a0965293-b9ac-4787-9e49-bea41b374da2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.375860 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d" (OuterVolumeSpecName: "kube-api-access-jw78d") pod "a0965293-b9ac-4787-9e49-bea41b374da2" (UID: "a0965293-b9ac-4787-9e49-bea41b374da2"). InnerVolumeSpecName "kube-api-access-jw78d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.422101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config" (OuterVolumeSpecName: "config") pod "a0965293-b9ac-4787-9e49-bea41b374da2" (UID: "a0965293-b9ac-4787-9e49-bea41b374da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.424294 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0965293-b9ac-4787-9e49-bea41b374da2" (UID: "a0965293-b9ac-4787-9e49-bea41b374da2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.443192 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a0965293-b9ac-4787-9e49-bea41b374da2" (UID: "a0965293-b9ac-4787-9e49-bea41b374da2"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.471284 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.471318 4835 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.471329 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.471338 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0965293-b9ac-4787-9e49-bea41b374da2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.471347 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw78d\" (UniqueName: \"kubernetes.io/projected/a0965293-b9ac-4787-9e49-bea41b374da2-kube-api-access-jw78d\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.907701 4835 generic.go:334] "Generic (PLEG): container finished" podID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" containerID="3b5ac41ff2999bbe5b233b240b25d7210474d04899fa404444643a14fe3a63b8" exitCode=0 Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.907788 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" event={"ID":"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858","Type":"ContainerDied","Data":"3b5ac41ff2999bbe5b233b240b25d7210474d04899fa404444643a14fe3a63b8"} Jan 29 16:00:39 crc kubenswrapper[4835]: 
I0129 16:00:39.912551 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9cd9cfb8-b9rfv" event={"ID":"a0965293-b9ac-4787-9e49-bea41b374da2","Type":"ContainerDied","Data":"f74f39d6f24e8d10e257c798d27e6721067abba5577b35abc2e93f3912e49a9a"} Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.912632 4835 scope.go:117] "RemoveContainer" containerID="a5096d4d38021dc3d567a1dcf38f502d867b8b45f2ace9dae5fac61c20c2a5fb" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.912758 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b9cd9cfb8-b9rfv" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.952854 4835 scope.go:117] "RemoveContainer" containerID="6e431c36ad6d0a848eadfc19d8639c4e34310d542e5006bc3b44806ddcd50497" Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.956423 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:39 crc kubenswrapper[4835]: I0129 16:00:39.965354 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b9cd9cfb8-b9rfv"] Jan 29 16:00:40 crc kubenswrapper[4835]: I0129 16:00:40.503945 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" path="/var/lib/kubelet/pods/a0965293-b9ac-4787-9e49-bea41b374da2/volumes" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.272784 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.406860 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts\") pod \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.407066 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx4qk\" (UniqueName: \"kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk\") pod \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.407177 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data\") pod \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.407238 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle\") pod \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\" (UID: \"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858\") " Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.412981 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk" (OuterVolumeSpecName: "kube-api-access-wx4qk") pod "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" (UID: "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858"). InnerVolumeSpecName "kube-api-access-wx4qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.413434 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts" (OuterVolumeSpecName: "scripts") pod "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" (UID: "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.435153 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" (UID: "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.443339 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data" (OuterVolumeSpecName: "config-data") pod "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" (UID: "3b5cbfb3-cf93-4a36-ab1d-d64d65e68858"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.510845 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.510882 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx4qk\" (UniqueName: \"kubernetes.io/projected/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-kube-api-access-wx4qk\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.510892 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.510929 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.945334 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" event={"ID":"3b5cbfb3-cf93-4a36-ab1d-d64d65e68858","Type":"ContainerDied","Data":"e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4"} Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.945388 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f4ce36ae593e510abcdf77c1216121a92542f6066cdff81504e3e2406657e4" Jan 29 16:00:41 crc kubenswrapper[4835]: I0129 16:00:41.945793 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7cvqt" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.048666 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:00:42 crc kubenswrapper[4835]: E0129 16:00:42.049280 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" containerName="nova-cell0-conductor-db-sync" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049302 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" containerName="nova-cell0-conductor-db-sync" Jan 29 16:00:42 crc kubenswrapper[4835]: E0129 16:00:42.049331 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-api" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049337 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-api" Jan 29 16:00:42 crc kubenswrapper[4835]: E0129 16:00:42.049347 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-httpd" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049354 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-httpd" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049578 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-httpd" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049602 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0965293-b9ac-4787-9e49-bea41b374da2" containerName="neutron-api" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.049616 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" containerName="nova-cell0-conductor-db-sync" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.050225 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.054733 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.055277 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hcpz7" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.058393 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.119308 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9db\" (UniqueName: \"kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.119587 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.119630 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc 
kubenswrapper[4835]: I0129 16:00:42.220940 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.221088 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw9db\" (UniqueName: \"kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.221126 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.225902 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.228013 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.238877 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tw9db\" (UniqueName: \"kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db\") pod \"nova-cell0-conductor-0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") " pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.383057 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.814443 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:00:42 crc kubenswrapper[4835]: I0129 16:00:42.955494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f4896cd-93c6-4708-8f62-f94cedda3fc0","Type":"ContainerStarted","Data":"96be128fc97936ffec97484a2fc682323a7ecb55596704d96d77fc5a7a7ff66f"} Jan 29 16:00:43 crc kubenswrapper[4835]: I0129 16:00:43.968067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f4896cd-93c6-4708-8f62-f94cedda3fc0","Type":"ContainerStarted","Data":"33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1"} Jan 29 16:00:43 crc kubenswrapper[4835]: I0129 16:00:43.968422 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:44 crc kubenswrapper[4835]: I0129 16:00:44.983037 4835 generic.go:334] "Generic (PLEG): container finished" podID="9068c986-c447-405a-8620-069aa4d48469" containerID="f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a" exitCode=0 Jan 29 16:00:44 crc kubenswrapper[4835]: I0129 16:00:44.983876 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerDied","Data":"f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a"} Jan 29 16:00:45 crc kubenswrapper[4835]: I0129 
16:00:45.005223 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.005202292 podStartE2EDuration="3.005202292s" podCreationTimestamp="2026-01-29 16:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:43.988864853 +0000 UTC m=+1946.181908557" watchObservedRunningTime="2026-01-29 16:00:45.005202292 +0000 UTC m=+1947.198245946" Jan 29 16:00:45 crc kubenswrapper[4835]: I0129 16:00:45.994052 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerStarted","Data":"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc"} Jan 29 16:00:46 crc kubenswrapper[4835]: I0129 16:00:46.016938 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrwpz" podStartSLOduration=7.454883658 podStartE2EDuration="1m41.016918404s" podCreationTimestamp="2026-01-29 15:59:05 +0000 UTC" firstStartedPulling="2026-01-29 15:59:11.876169452 +0000 UTC m=+1854.069213106" lastFinishedPulling="2026-01-29 16:00:45.438204198 +0000 UTC m=+1947.631247852" observedRunningTime="2026-01-29 16:00:46.008770108 +0000 UTC m=+1948.201813782" watchObservedRunningTime="2026-01-29 16:00:46.016918404 +0000 UTC m=+1948.209962058" Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.414085 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.950986 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m6p7g"] Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.952483 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.956544 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.956641 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 29 16:00:47 crc kubenswrapper[4835]: I0129 16:00:47.963947 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6p7g"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.041661 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.042056 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5rrp\" (UniqueName: \"kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.042130 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.042157 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.120417 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.121965 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.126237 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.150073 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151065 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151095 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151149 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rx2\" (UniqueName: \"kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 
crc kubenswrapper[4835]: I0129 16:00:48.151178 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151196 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151213 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151256 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.151277 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5rrp\" (UniqueName: \"kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.161503 4835 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.176383 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.178361 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.225126 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5rrp\" (UniqueName: \"kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp\") pod \"nova-cell0-cell-mapping-m6p7g\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.261763 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rx2\" (UniqueName: \"kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.261827 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.261845 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.261896 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.266238 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.268457 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.269257 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.273010 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.334539 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rx2\" (UniqueName: \"kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2\") pod \"nova-api-0\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.410020 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.411447 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.418937 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.427031 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.428414 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.438880 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.439390 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.569762 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25xq\" (UniqueName: \"kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.570073 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.570155 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmql\" (UniqueName: \"kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.570172 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.570209 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc 
kubenswrapper[4835]: I0129 16:00:48.570231 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.570279 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675569 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmql\" (UniqueName: \"kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675614 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675652 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675679 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675757 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675836 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p25xq\" (UniqueName: \"kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.675863 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.677395 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.688204 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc 
kubenswrapper[4835]: I0129 16:00:48.688733 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.695441 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.695506 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.700628 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p25xq\" (UniqueName: \"kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq\") pod \"nova-scheduler-0\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.706996 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmql\" (UniqueName: \"kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql\") pod \"nova-metadata-0\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.753898 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.760266 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.760316 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.760332 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.763293 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.766844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.766940 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.767743 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.776743 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.777742 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"] Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.777832 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.879681 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxdf\" (UniqueName: \"kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.879806 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.879897 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.879974 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.880134 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvd7v\" (UniqueName: \"kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v\") pod 
\"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.880215 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.880313 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.880388 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.880507 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982326 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982409 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxdf\" (UniqueName: \"kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982444 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982478 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982499 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982588 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvd7v\" (UniqueName: \"kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: 
\"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982607 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982636 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.982659 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.983281 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.984524 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc 
kubenswrapper[4835]: I0129 16:00:48.984719 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.984817 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.985522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.988166 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:48 crc kubenswrapper[4835]: I0129 16:00:48.990822 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.006895 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zvd7v\" (UniqueName: \"kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v\") pod \"dnsmasq-dns-5bfb54f9b5-h854c\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") " pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.022691 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerStarted","Data":"efef5c2434c64fc498f94187b303b93ccc7229269b7c6f889de76a7809116c0b"} Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.022926 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.081035 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.844072323 podStartE2EDuration="27.081005192s" podCreationTimestamp="2026-01-29 16:00:22 +0000 UTC" firstStartedPulling="2026-01-29 16:00:23.568731763 +0000 UTC m=+1925.761775417" lastFinishedPulling="2026-01-29 16:00:47.805664632 +0000 UTC m=+1949.998708286" observedRunningTime="2026-01-29 16:00:49.050943463 +0000 UTC m=+1951.243987117" watchObservedRunningTime="2026-01-29 16:00:49.081005192 +0000 UTC m=+1951.274048846" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.115226 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6p7g"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.196290 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.266178 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.380982 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.448568 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhhrj"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.450091 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.453044 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.453144 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.462724 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhhrj"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.493107 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.493160 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.493323 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.493886 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjstr\" (UniqueName: \"kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.520048 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.580440 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxdf\" (UniqueName: \"kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf\") pod \"nova-cell1-novncproxy-0\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:49 crc kubenswrapper[4835]: W0129 16:00:49.585784 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28e05047_7318_4cd2_b46d_6f668be25f1d.slice/crio-6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5 WatchSource:0}: Error finding container 6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5: Status 404 returned error can't find the container with id 6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5 Jan 29 16:00:49 crc kubenswrapper[4835]: W0129 16:00:49.588142 4835 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede2cd52_c131_4cfa_8bd2_415d45e4276b.slice/crio-a821d71c095c631704b04c1e8047d7e8d18b363afc838fb5e91d675bf760ad80 WatchSource:0}: Error finding container a821d71c095c631704b04c1e8047d7e8d18b363afc838fb5e91d675bf760ad80: Status 404 returned error can't find the container with id a821d71c095c631704b04c1e8047d7e8d18b363afc838fb5e91d675bf760ad80 Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.595408 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjstr\" (UniqueName: \"kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.595534 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.595554 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.595605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " 
pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.600645 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.608896 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.609450 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.615294 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjstr\" (UniqueName: \"kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr\") pod \"nova-cell1-conductor-db-sync-lhhrj\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.853983 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.879665 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:00:49 crc kubenswrapper[4835]: I0129 16:00:49.907013 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"] Jan 29 16:00:49 crc kubenswrapper[4835]: W0129 16:00:49.926713 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82569a58_8a40_423d_8d86_ed3bf74d59d6.slice/crio-9c03770b052ec6884dc99ab74f1866c5d65116866a18915c736571466cbd4800 WatchSource:0}: Error finding container 9c03770b052ec6884dc99ab74f1866c5d65116866a18915c736571466cbd4800: Status 404 returned error can't find the container with id 9c03770b052ec6884dc99ab74f1866c5d65116866a18915c736571466cbd4800 Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.072494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerStarted","Data":"a821d71c095c631704b04c1e8047d7e8d18b363afc838fb5e91d675bf760ad80"} Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.080703 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" event={"ID":"82569a58-8a40-423d-8d86-ed3bf74d59d6","Type":"ContainerStarted","Data":"9c03770b052ec6884dc99ab74f1866c5d65116866a18915c736571466cbd4800"} Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.082588 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerStarted","Data":"02aa9a6e0a4db4e5f30a4c44cb9d9a0e2e03619bd7f821f2fee453dfebe43ec9"} Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.084168 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342971ff-33d6-459e-9767-7950c42be1fd","Type":"ContainerStarted","Data":"9baf00ae3740a939f4eb9af4e8f9c1545605c5db21e43fff9815d45907f6018b"} 
Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.085426 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6p7g" event={"ID":"28e05047-7318-4cd2-b46d-6f668be25f1d","Type":"ContainerStarted","Data":"6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5"} Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.360022 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:00:50 crc kubenswrapper[4835]: W0129 16:00:50.364737 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c31e6b_d1e2_4178_a905_529bf3b59f4b.slice/crio-53d76f47104ea2d5f5562cc48cb31a91bc3b33941bbae2ae7da4a78016f840c7 WatchSource:0}: Error finding container 53d76f47104ea2d5f5562cc48cb31a91bc3b33941bbae2ae7da4a78016f840c7: Status 404 returned error can't find the container with id 53d76f47104ea2d5f5562cc48cb31a91bc3b33941bbae2ae7da4a78016f840c7 Jan 29 16:00:50 crc kubenswrapper[4835]: I0129 16:00:50.461011 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhhrj"] Jan 29 16:00:51 crc kubenswrapper[4835]: I0129 16:00:51.094104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81c31e6b-d1e2-4178-a905-529bf3b59f4b","Type":"ContainerStarted","Data":"53d76f47104ea2d5f5562cc48cb31a91bc3b33941bbae2ae7da4a78016f840c7"} Jan 29 16:00:51 crc kubenswrapper[4835]: I0129 16:00:51.095140 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" event={"ID":"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89","Type":"ContainerStarted","Data":"b229f53d3499664a57d14afcde93f6a20bef648188b3b747ec10c9a6309abb93"} Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.115442 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6p7g" 
event={"ID":"28e05047-7318-4cd2-b46d-6f668be25f1d","Type":"ContainerStarted","Data":"8d18ece74a776d4b42d09242899d749d6612e084ed5e056932402fba3ffdfebc"} Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.120868 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" event={"ID":"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89","Type":"ContainerStarted","Data":"1fcaed86489c7aecce529375985bed09cd24f7a06d51eb4a85cd96ac9381dfeb"} Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.125017 4835 generic.go:334] "Generic (PLEG): container finished" podID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerID="94626dd62055afecbe14f7670ed31c0c367a19f4ff148cde4b91f5446a2113ef" exitCode=0 Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.125046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" event={"ID":"82569a58-8a40-423d-8d86-ed3bf74d59d6","Type":"ContainerDied","Data":"94626dd62055afecbe14f7670ed31c0c367a19f4ff148cde4b91f5446a2113ef"} Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.137766 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m6p7g" podStartSLOduration=5.137749325 podStartE2EDuration="5.137749325s" podCreationTimestamp="2026-01-29 16:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:52.131875277 +0000 UTC m=+1954.324918941" watchObservedRunningTime="2026-01-29 16:00:52.137749325 +0000 UTC m=+1954.330792979" Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.219585 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" podStartSLOduration=3.219559882 podStartE2EDuration="3.219559882s" podCreationTimestamp="2026-01-29 16:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:52.206586034 +0000 UTC m=+1954.399629688" watchObservedRunningTime="2026-01-29 16:00:52.219559882 +0000 UTC m=+1954.412603536" Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.288495 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:52 crc kubenswrapper[4835]: I0129 16:00:52.300169 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:00:55 crc kubenswrapper[4835]: I0129 16:00:55.941235 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:55 crc kubenswrapper[4835]: I0129 16:00:55.943181 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:55 crc kubenswrapper[4835]: I0129 16:00:55.994834 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:56 crc kubenswrapper[4835]: I0129 16:00:56.209252 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:56 crc kubenswrapper[4835]: I0129 16:00:56.301702 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.173527 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" event={"ID":"82569a58-8a40-423d-8d86-ed3bf74d59d6","Type":"ContainerStarted","Data":"9befa5acfbb6bcdfabe6a343d563bb32b4c20b0d8f30705f544d57834a8eea07"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.173872 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 
16:00:57.176367 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerStarted","Data":"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.176403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerStarted","Data":"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.176483 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-log" containerID="cri-o://9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" gracePeriod=30 Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.176615 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-metadata" containerID="cri-o://f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" gracePeriod=30 Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.188188 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81c31e6b-d1e2-4178-a905-529bf3b59f4b","Type":"ContainerStarted","Data":"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.188361 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181" gracePeriod=30 Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 
16:00:57.197338 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" podStartSLOduration=9.197318332 podStartE2EDuration="9.197318332s" podCreationTimestamp="2026-01-29 16:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:00:57.19290549 +0000 UTC m=+1959.385949144" watchObservedRunningTime="2026-01-29 16:00:57.197318332 +0000 UTC m=+1959.390361996" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.210550 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342971ff-33d6-459e-9767-7950c42be1fd","Type":"ContainerStarted","Data":"e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.214412 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerStarted","Data":"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.214454 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerStarted","Data":"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e"} Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.223722 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.375543564 podStartE2EDuration="9.223704428s" podCreationTimestamp="2026-01-29 16:00:48 +0000 UTC" firstStartedPulling="2026-01-29 16:00:50.366214272 +0000 UTC m=+1952.559257916" lastFinishedPulling="2026-01-29 16:00:56.214375126 +0000 UTC m=+1958.407418780" observedRunningTime="2026-01-29 16:00:57.21667108 +0000 UTC m=+1959.409714744" watchObservedRunningTime="2026-01-29 
16:00:57.223704428 +0000 UTC m=+1959.416748082" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.248295 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6602917699999997 podStartE2EDuration="9.248271109s" podCreationTimestamp="2026-01-29 16:00:48 +0000 UTC" firstStartedPulling="2026-01-29 16:00:49.614177878 +0000 UTC m=+1951.807221532" lastFinishedPulling="2026-01-29 16:00:56.202157217 +0000 UTC m=+1958.395200871" observedRunningTime="2026-01-29 16:00:57.237936657 +0000 UTC m=+1959.430980311" watchObservedRunningTime="2026-01-29 16:00:57.248271109 +0000 UTC m=+1959.441314763" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.268911 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.657197521 podStartE2EDuration="9.268892169s" podCreationTimestamp="2026-01-29 16:00:48 +0000 UTC" firstStartedPulling="2026-01-29 16:00:49.590616593 +0000 UTC m=+1951.783660247" lastFinishedPulling="2026-01-29 16:00:56.202311241 +0000 UTC m=+1958.395354895" observedRunningTime="2026-01-29 16:00:57.265244247 +0000 UTC m=+1959.458287901" watchObservedRunningTime="2026-01-29 16:00:57.268892169 +0000 UTC m=+1959.461935823" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.287047 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6931757100000002 podStartE2EDuration="9.287022347s" podCreationTimestamp="2026-01-29 16:00:48 +0000 UTC" firstStartedPulling="2026-01-29 16:00:49.60832987 +0000 UTC m=+1951.801373524" lastFinishedPulling="2026-01-29 16:00:56.202176507 +0000 UTC m=+1958.395220161" observedRunningTime="2026-01-29 16:00:57.284119464 +0000 UTC m=+1959.477163118" watchObservedRunningTime="2026-01-29 16:00:57.287022347 +0000 UTC m=+1959.480066001" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.825389 4835 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.987048 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle\") pod \"347c5666-f349-4545-9c0c-a564a844054f\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.987604 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs\") pod \"347c5666-f349-4545-9c0c-a564a844054f\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.987697 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmql\" (UniqueName: \"kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql\") pod \"347c5666-f349-4545-9c0c-a564a844054f\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.987726 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data\") pod \"347c5666-f349-4545-9c0c-a564a844054f\" (UID: \"347c5666-f349-4545-9c0c-a564a844054f\") " Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.987933 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs" (OuterVolumeSpecName: "logs") pod "347c5666-f349-4545-9c0c-a564a844054f" (UID: "347c5666-f349-4545-9c0c-a564a844054f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.988347 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/347c5666-f349-4545-9c0c-a564a844054f-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:57 crc kubenswrapper[4835]: I0129 16:00:57.992989 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql" (OuterVolumeSpecName: "kube-api-access-mnmql") pod "347c5666-f349-4545-9c0c-a564a844054f" (UID: "347c5666-f349-4545-9c0c-a564a844054f"). InnerVolumeSpecName "kube-api-access-mnmql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.015508 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "347c5666-f349-4545-9c0c-a564a844054f" (UID: "347c5666-f349-4545-9c0c-a564a844054f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.019529 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data" (OuterVolumeSpecName: "config-data") pod "347c5666-f349-4545-9c0c-a564a844054f" (UID: "347c5666-f349-4545-9c0c-a564a844054f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.090562 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.090598 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnmql\" (UniqueName: \"kubernetes.io/projected/347c5666-f349-4545-9c0c-a564a844054f-kube-api-access-mnmql\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.090613 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/347c5666-f349-4545-9c0c-a564a844054f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226429 4835 generic.go:334] "Generic (PLEG): container finished" podID="347c5666-f349-4545-9c0c-a564a844054f" containerID="f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" exitCode=0 Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226459 4835 generic.go:334] "Generic (PLEG): container finished" podID="347c5666-f349-4545-9c0c-a564a844054f" containerID="9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" exitCode=143 Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226553 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerDied","Data":"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a"} Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226651 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerDied","Data":"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86"} Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226667 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"347c5666-f349-4545-9c0c-a564a844054f","Type":"ContainerDied","Data":"02aa9a6e0a4db4e5f30a4c44cb9d9a0e2e03619bd7f821f2fee453dfebe43ec9"} Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226687 4835 scope.go:117] "RemoveContainer" containerID="f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.226707 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrwpz" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="registry-server" containerID="cri-o://e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc" gracePeriod=2 Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.410017 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.432828 4835 scope.go:117] "RemoveContainer" containerID="9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.443130 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" 
Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.443177 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.474995 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.490619 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:58 crc kubenswrapper[4835]: E0129 16:00:58.502127 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-metadata" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.502174 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-metadata" Jan 29 16:00:58 crc kubenswrapper[4835]: E0129 16:00:58.502241 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-log" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.502252 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-log" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.502529 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-metadata" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.502565 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="347c5666-f349-4545-9c0c-a564a844054f" containerName="nova-metadata-log" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.503982 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.508054 4835 scope.go:117] "RemoveContainer" 
containerID="f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.508703 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: E0129 16:00:58.510540 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a\": container with ID starting with f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a not found: ID does not exist" containerID="f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.510587 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a"} err="failed to get container status \"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a\": rpc error: code = NotFound desc = could not find container \"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a\": container with ID starting with f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a not found: ID does not exist" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.510616 4835 scope.go:117] "RemoveContainer" containerID="9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.516635 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:00:58 crc kubenswrapper[4835]: E0129 16:00:58.516956 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86\": container with ID starting with 
9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86 not found: ID does not exist" containerID="9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.516987 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86"} err="failed to get container status \"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86\": rpc error: code = NotFound desc = could not find container \"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86\": container with ID starting with 9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86 not found: ID does not exist" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.517008 4835 scope.go:117] "RemoveContainer" containerID="f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.517513 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a"} err="failed to get container status \"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a\": rpc error: code = NotFound desc = could not find container \"f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a\": container with ID starting with f0c2f3f71427a90b40110f1e45dfe8f11e0c7ed5ae28d4d02a9d35d6ce0a639a not found: ID does not exist" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.517530 4835 scope.go:117] "RemoveContainer" containerID="9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.521106 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347c5666-f349-4545-9c0c-a564a844054f" path="/var/lib/kubelet/pods/347c5666-f349-4545-9c0c-a564a844054f/volumes" Jan 29 16:00:58 crc 
kubenswrapper[4835]: I0129 16:00:58.523693 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86"} err="failed to get container status \"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86\": rpc error: code = NotFound desc = could not find container \"9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86\": container with ID starting with 9086243e3f9f38dbb6f05a4ce5a774c79e4771263ddcc7bb1fc8cc9901913d86 not found: ID does not exist" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.524775 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.705810 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.705902 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.706966 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.707662 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.707770 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqshd\" (UniqueName: \"kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.754422 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.754544 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.787114 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.796975 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.810976 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws6hc\" (UniqueName: \"kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc\") pod \"9068c986-c447-405a-8620-069aa4d48469\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.811038 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content\") pod \"9068c986-c447-405a-8620-069aa4d48469\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.811215 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities\") pod \"9068c986-c447-405a-8620-069aa4d48469\" (UID: \"9068c986-c447-405a-8620-069aa4d48469\") " Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.811758 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812255 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812339 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812439 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities" (OuterVolumeSpecName: "utilities") pod "9068c986-c447-405a-8620-069aa4d48469" (UID: "9068c986-c447-405a-8620-069aa4d48469"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812787 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqshd\" (UniqueName: \"kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812832 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.812884 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.813159 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-utilities\") on node \"crc\" DevicePath \"\"" Jan 
29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.820636 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc" (OuterVolumeSpecName: "kube-api-access-ws6hc") pod "9068c986-c447-405a-8620-069aa4d48469" (UID: "9068c986-c447-405a-8620-069aa4d48469"). InnerVolumeSpecName "kube-api-access-ws6hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.821458 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.822146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.833232 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9068c986-c447-405a-8620-069aa4d48469" (UID: "9068c986-c447-405a-8620-069aa4d48469"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.835867 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.846165 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqshd\" (UniqueName: \"kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd\") pod \"nova-metadata-0\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " pod="openstack/nova-metadata-0" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.915226 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws6hc\" (UniqueName: \"kubernetes.io/projected/9068c986-c447-405a-8620-069aa4d48469-kube-api-access-ws6hc\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:58 crc kubenswrapper[4835]: I0129 16:00:58.915257 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9068c986-c447-405a-8620-069aa4d48469-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.144688 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.278594 4835 generic.go:334] "Generic (PLEG): container finished" podID="9068c986-c447-405a-8620-069aa4d48469" containerID="e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc" exitCode=0 Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.278659 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerDied","Data":"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc"} Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.279300 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrwpz" event={"ID":"9068c986-c447-405a-8620-069aa4d48469","Type":"ContainerDied","Data":"f648e47796506dc4efa9bcfac7e985f72bf1cb9c7dee7f715912ed9b800a5efa"} Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.279328 4835 scope.go:117] "RemoveContainer" containerID="e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.279566 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrwpz" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.348249 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.363598 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.364522 4835 scope.go:117] "RemoveContainer" containerID="f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.376446 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrwpz"] Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.455197 4835 scope.go:117] "RemoveContainer" containerID="dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.525693 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.525717 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.526375 4835 scope.go:117] "RemoveContainer" containerID="e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc" Jan 29 16:00:59 crc kubenswrapper[4835]: E0129 16:00:59.528222 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc\": container with ID starting with e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc not found: ID does not exist" containerID="e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.528261 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc"} err="failed to get container status \"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc\": rpc error: code = NotFound desc = could not find container \"e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc\": container with ID starting with e332753640d7dc32379ba8c9969c011d4fb0befe18c517f08891d500742075fc not found: ID does not exist" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.528290 4835 scope.go:117] "RemoveContainer" containerID="f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a" Jan 29 16:00:59 crc kubenswrapper[4835]: E0129 16:00:59.529870 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a\": container with ID starting with f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a not found: ID does not exist" containerID="f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.529916 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a"} err="failed to get container status \"f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a\": rpc error: code = NotFound desc = could not find container 
\"f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a\": container with ID starting with f418caba2165af528152f9a338ec1437984a581a0835cdbae030e081f368830a not found: ID does not exist" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.529944 4835 scope.go:117] "RemoveContainer" containerID="dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f" Jan 29 16:00:59 crc kubenswrapper[4835]: E0129 16:00:59.530508 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f\": container with ID starting with dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f not found: ID does not exist" containerID="dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f" Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.530554 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f"} err="failed to get container status \"dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f\": rpc error: code = NotFound desc = could not find container \"dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f\": container with ID starting with dac046fcf047a4aab2c59794be028fae19d430f9c7441bd5dc48c4bda583b20f not found: ID does not exist" Jan 29 16:00:59 crc kubenswrapper[4835]: W0129 16:00:59.662780 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9269f00b_eb6d_481c_8b93_81aa0f929512.slice/crio-ed5507b230bb78a2e20fe47309008baf46ad10817539f333df7329bcbd7146fb WatchSource:0}: Error finding container ed5507b230bb78a2e20fe47309008baf46ad10817539f333df7329bcbd7146fb: Status 404 returned error can't find the container with id ed5507b230bb78a2e20fe47309008baf46ad10817539f333df7329bcbd7146fb Jan 29 16:00:59 crc 
kubenswrapper[4835]: I0129 16:00:59.666734 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:00:59 crc kubenswrapper[4835]: I0129 16:00:59.855340 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.142511 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29495041-wrglx"] Jan 29 16:01:00 crc kubenswrapper[4835]: E0129 16:01:00.143360 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="registry-server" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.143380 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="registry-server" Jan 29 16:01:00 crc kubenswrapper[4835]: E0129 16:01:00.143404 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="extract-content" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.143411 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="extract-content" Jan 29 16:01:00 crc kubenswrapper[4835]: E0129 16:01:00.143439 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="extract-utilities" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.143446 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="extract-utilities" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.143703 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9068c986-c447-405a-8620-069aa4d48469" containerName="registry-server" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.146380 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.153804 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-wrglx"] Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.251820 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.251884 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.252312 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.252637 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t47k\" (UniqueName: \"kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.292116 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerStarted","Data":"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073"} Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.292182 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerStarted","Data":"ed5507b230bb78a2e20fe47309008baf46ad10817539f333df7329bcbd7146fb"} Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.297950 4835 generic.go:334] "Generic (PLEG): container finished" podID="28e05047-7318-4cd2-b46d-6f668be25f1d" containerID="8d18ece74a776d4b42d09242899d749d6612e084ed5e056932402fba3ffdfebc" exitCode=0 Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.298021 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6p7g" event={"ID":"28e05047-7318-4cd2-b46d-6f668be25f1d","Type":"ContainerDied","Data":"8d18ece74a776d4b42d09242899d749d6612e084ed5e056932402fba3ffdfebc"} Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.359398 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t47k\" (UniqueName: \"kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.362992 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.363088 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.363391 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.371203 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.371536 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.378200 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.385428 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t47k\" (UniqueName: 
\"kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k\") pod \"keystone-cron-29495041-wrglx\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.473258 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.513410 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9068c986-c447-405a-8620-069aa4d48469" path="/var/lib/kubelet/pods/9068c986-c447-405a-8620-069aa4d48469/volumes" Jan 29 16:01:00 crc kubenswrapper[4835]: W0129 16:01:00.943979 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e0d72e_954f_4460_95ff_46e5f602c3c5.slice/crio-2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22 WatchSource:0}: Error finding container 2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22: Status 404 returned error can't find the container with id 2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22 Jan 29 16:01:00 crc kubenswrapper[4835]: I0129 16:01:00.956502 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29495041-wrglx"] Jan 29 16:01:01 crc kubenswrapper[4835]: I0129 16:01:01.310111 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerStarted","Data":"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2"} Jan 29 16:01:01 crc kubenswrapper[4835]: I0129 16:01:01.314257 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-wrglx" event={"ID":"84e0d72e-954f-4460-95ff-46e5f602c3c5","Type":"ContainerStarted","Data":"fdbbcf4596261b258901f79039e70681333739ff1835fe063896f936a77e3fbb"} Jan 29 16:01:01 crc 
kubenswrapper[4835]: I0129 16:01:01.314294 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-wrglx" event={"ID":"84e0d72e-954f-4460-95ff-46e5f602c3c5","Type":"ContainerStarted","Data":"2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22"} Jan 29 16:01:01 crc kubenswrapper[4835]: I0129 16:01:01.347030 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.347009628 podStartE2EDuration="3.347009628s" podCreationTimestamp="2026-01-29 16:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:01.328991593 +0000 UTC m=+1963.522035247" watchObservedRunningTime="2026-01-29 16:01:01.347009628 +0000 UTC m=+1963.540053282" Jan 29 16:01:01 crc kubenswrapper[4835]: I0129 16:01:01.352649 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29495041-wrglx" podStartSLOduration=1.35263343 podStartE2EDuration="1.35263343s" podCreationTimestamp="2026-01-29 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:01.344479334 +0000 UTC m=+1963.537522988" watchObservedRunningTime="2026-01-29 16:01:01.35263343 +0000 UTC m=+1963.545677084" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.055753 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.095901 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rrp\" (UniqueName: \"kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp\") pod \"28e05047-7318-4cd2-b46d-6f668be25f1d\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.095988 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data\") pod \"28e05047-7318-4cd2-b46d-6f668be25f1d\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.096139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle\") pod \"28e05047-7318-4cd2-b46d-6f668be25f1d\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.096309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts\") pod \"28e05047-7318-4cd2-b46d-6f668be25f1d\" (UID: \"28e05047-7318-4cd2-b46d-6f668be25f1d\") " Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.103327 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts" (OuterVolumeSpecName: "scripts") pod "28e05047-7318-4cd2-b46d-6f668be25f1d" (UID: "28e05047-7318-4cd2-b46d-6f668be25f1d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.121320 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp" (OuterVolumeSpecName: "kube-api-access-w5rrp") pod "28e05047-7318-4cd2-b46d-6f668be25f1d" (UID: "28e05047-7318-4cd2-b46d-6f668be25f1d"). InnerVolumeSpecName "kube-api-access-w5rrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.137882 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28e05047-7318-4cd2-b46d-6f668be25f1d" (UID: "28e05047-7318-4cd2-b46d-6f668be25f1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.138334 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data" (OuterVolumeSpecName: "config-data") pod "28e05047-7318-4cd2-b46d-6f668be25f1d" (UID: "28e05047-7318-4cd2-b46d-6f668be25f1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.198819 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5rrp\" (UniqueName: \"kubernetes.io/projected/28e05047-7318-4cd2-b46d-6f668be25f1d-kube-api-access-w5rrp\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.198859 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.198876 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.198891 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e05047-7318-4cd2-b46d-6f668be25f1d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.330265 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m6p7g" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.330716 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m6p7g" event={"ID":"28e05047-7318-4cd2-b46d-6f668be25f1d","Type":"ContainerDied","Data":"6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5"} Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.330751 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cdfa51c055c1ac3f16efa9880da6a27fa09053fd6469d80130296c4ddc800b5" Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.479527 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.480141 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-log" containerID="cri-o://352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e" gracePeriod=30 Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.480220 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-api" containerID="cri-o://8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211" gracePeriod=30 Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.505407 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 16:01:02.505657 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="342971ff-33d6-459e-9767-7950c42be1fd" containerName="nova-scheduler-scheduler" containerID="cri-o://e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" gracePeriod=30 Jan 29 16:01:02 crc kubenswrapper[4835]: I0129 
16:01:02.516049 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.341627 4835 generic.go:334] "Generic (PLEG): container finished" podID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerID="352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e" exitCode=143 Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.341726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerDied","Data":"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e"} Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.341855 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-log" containerID="cri-o://1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" gracePeriod=30 Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.341948 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-metadata" containerID="cri-o://37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" gracePeriod=30 Jan 29 16:01:03 crc kubenswrapper[4835]: E0129 16:01:03.757422 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:01:03 crc kubenswrapper[4835]: E0129 16:01:03.763254 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:01:03 crc kubenswrapper[4835]: E0129 16:01:03.764715 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:01:03 crc kubenswrapper[4835]: E0129 16:01:03.764776 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="342971ff-33d6-459e-9767-7950c42be1fd" containerName="nova-scheduler-scheduler" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.921559 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928017 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs\") pod \"9269f00b-eb6d-481c-8b93-81aa0f929512\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data\") pod \"9269f00b-eb6d-481c-8b93-81aa0f929512\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqshd\" (UniqueName: \"kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd\") pod \"9269f00b-eb6d-481c-8b93-81aa0f929512\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928302 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs\") pod \"9269f00b-eb6d-481c-8b93-81aa0f929512\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928317 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle\") pod \"9269f00b-eb6d-481c-8b93-81aa0f929512\" (UID: \"9269f00b-eb6d-481c-8b93-81aa0f929512\") " Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.928453 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs" (OuterVolumeSpecName: "logs") pod "9269f00b-eb6d-481c-8b93-81aa0f929512" (UID: "9269f00b-eb6d-481c-8b93-81aa0f929512"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.929784 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9269f00b-eb6d-481c-8b93-81aa0f929512-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.935082 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd" (OuterVolumeSpecName: "kube-api-access-gqshd") pod "9269f00b-eb6d-481c-8b93-81aa0f929512" (UID: "9269f00b-eb6d-481c-8b93-81aa0f929512"). InnerVolumeSpecName "kube-api-access-gqshd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.961866 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9269f00b-eb6d-481c-8b93-81aa0f929512" (UID: "9269f00b-eb6d-481c-8b93-81aa0f929512"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.968360 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data" (OuterVolumeSpecName: "config-data") pod "9269f00b-eb6d-481c-8b93-81aa0f929512" (UID: "9269f00b-eb6d-481c-8b93-81aa0f929512"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:03 crc kubenswrapper[4835]: I0129 16:01:03.992297 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9269f00b-eb6d-481c-8b93-81aa0f929512" (UID: "9269f00b-eb6d-481c-8b93-81aa0f929512"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.030538 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.030769 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqshd\" (UniqueName: \"kubernetes.io/projected/9269f00b-eb6d-481c-8b93-81aa0f929512-kube-api-access-gqshd\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.030857 4835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.030937 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9269f00b-eb6d-481c-8b93-81aa0f929512-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.270104 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.363853 4835 generic.go:334] "Generic (PLEG): container finished" podID="9269f00b-eb6d-481c-8b93-81aa0f929512" 
containerID="37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" exitCode=0 Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.363895 4835 generic.go:334] "Generic (PLEG): container finished" podID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerID="1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" exitCode=143 Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.363956 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerDied","Data":"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2"} Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.363990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerDied","Data":"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073"} Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.364005 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9269f00b-eb6d-481c-8b93-81aa0f929512","Type":"ContainerDied","Data":"ed5507b230bb78a2e20fe47309008baf46ad10817539f333df7329bcbd7146fb"} Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.364025 4835 scope.go:117] "RemoveContainer" containerID="37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.364167 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.367600 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.368024 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerName="dnsmasq-dns" containerID="cri-o://a66043698a03e5f4a0e41f575e68579bd9553912fae8d7306dee73d253a53b47" gracePeriod=10 Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.387317 4835 generic.go:334] "Generic (PLEG): container finished" podID="84e0d72e-954f-4460-95ff-46e5f602c3c5" containerID="fdbbcf4596261b258901f79039e70681333739ff1835fe063896f936a77e3fbb" exitCode=0 Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.387371 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-wrglx" event={"ID":"84e0d72e-954f-4460-95ff-46e5f602c3c5","Type":"ContainerDied","Data":"fdbbcf4596261b258901f79039e70681333739ff1835fe063896f936a77e3fbb"} Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.419540 4835 scope.go:117] "RemoveContainer" containerID="1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.458627 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.471791 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.503512 4835 scope.go:117] "RemoveContainer" containerID="37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.505554 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" 
path="/var/lib/kubelet/pods/9269f00b-eb6d-481c-8b93-81aa0f929512/volumes" Jan 29 16:01:04 crc kubenswrapper[4835]: E0129 16:01:04.507020 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2\": container with ID starting with 37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2 not found: ID does not exist" containerID="37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.507053 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2"} err="failed to get container status \"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2\": rpc error: code = NotFound desc = could not find container \"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2\": container with ID starting with 37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2 not found: ID does not exist" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.507076 4835 scope.go:117] "RemoveContainer" containerID="1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" Jan 29 16:01:04 crc kubenswrapper[4835]: E0129 16:01:04.507518 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073\": container with ID starting with 1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073 not found: ID does not exist" containerID="1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.507565 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073"} err="failed to get container status \"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073\": rpc error: code = NotFound desc = could not find container \"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073\": container with ID starting with 1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073 not found: ID does not exist" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.507592 4835 scope.go:117] "RemoveContainer" containerID="37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.512635 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2"} err="failed to get container status \"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2\": rpc error: code = NotFound desc = could not find container \"37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2\": container with ID starting with 37fa7300794cc48212135b6b57ce5a1ae9cfe378db7300fcc26784ab30cd06d2 not found: ID does not exist" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.512686 4835 scope.go:117] "RemoveContainer" containerID="1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.514120 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073"} err="failed to get container status \"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073\": rpc error: code = NotFound desc = could not find container \"1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073\": container with ID starting with 1b7018e0cc7a94706b9fbb23b4d426d15d8ca25bea36536078f35e0efd97a073 not found: ID does not 
exist" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.520373 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:04 crc kubenswrapper[4835]: E0129 16:01:04.520821 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-log" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.520839 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-log" Jan 29 16:01:04 crc kubenswrapper[4835]: E0129 16:01:04.520873 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e05047-7318-4cd2-b46d-6f668be25f1d" containerName="nova-manage" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.520879 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e05047-7318-4cd2-b46d-6f668be25f1d" containerName="nova-manage" Jan 29 16:01:04 crc kubenswrapper[4835]: E0129 16:01:04.520900 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-metadata" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.520906 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-metadata" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.521071 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-log" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.521086 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e05047-7318-4cd2-b46d-6f668be25f1d" containerName="nova-manage" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.521114 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9269f00b-eb6d-481c-8b93-81aa0f929512" containerName="nova-metadata-metadata" Jan 29 16:01:04 crc 
kubenswrapper[4835]: I0129 16:01:04.522321 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.524099 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.524176 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.531386 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.546249 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.546331 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.546419 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.546501 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.546535 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vng\" (UniqueName: \"kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.648806 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.648892 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.648939 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vng\" (UniqueName: \"kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.649036 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.649085 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.649968 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.654107 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.656012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.656543 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.671460 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-98vng\" (UniqueName: \"kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng\") pod \"nova-metadata-0\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " pod="openstack/nova-metadata-0" Jan 29 16:01:04 crc kubenswrapper[4835]: I0129 16:01:04.911188 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.375307 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:05 crc kubenswrapper[4835]: W0129 16:01:05.407227 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67df9f26_b1b6_450a_94c6_59928c23c485.slice/crio-91fdc65319d398b52643785d1b929bf4d073f88e4bb95af752cb93315dc01e72 WatchSource:0}: Error finding container 91fdc65319d398b52643785d1b929bf4d073f88e4bb95af752cb93315dc01e72: Status 404 returned error can't find the container with id 91fdc65319d398b52643785d1b929bf4d073f88e4bb95af752cb93315dc01e72 Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.408025 4835 generic.go:334] "Generic (PLEG): container finished" podID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerID="a66043698a03e5f4a0e41f575e68579bd9553912fae8d7306dee73d253a53b47" exitCode=0 Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.408082 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" event={"ID":"4d6ac59c-3290-44c5-a7fd-7f878afe393b","Type":"ContainerDied","Data":"a66043698a03e5f4a0e41f575e68579bd9553912fae8d7306dee73d253a53b47"} Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.409595 4835 generic.go:334] "Generic (PLEG): container finished" podID="342971ff-33d6-459e-9767-7950c42be1fd" containerID="e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" exitCode=0 Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.409755 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342971ff-33d6-459e-9767-7950c42be1fd","Type":"ContainerDied","Data":"e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6"} Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.460698 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.567946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.568010 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.568117 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.568147 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.568178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtvp\" 
(UniqueName: \"kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.568216 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb\") pod \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\" (UID: \"4d6ac59c-3290-44c5-a7fd-7f878afe393b\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.595423 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp" (OuterVolumeSpecName: "kube-api-access-7qtvp") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "kube-api-access-7qtvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.656332 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.659189 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.663104 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.670994 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.671057 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.671069 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtvp\" (UniqueName: \"kubernetes.io/projected/4d6ac59c-3290-44c5-a7fd-7f878afe393b-kube-api-access-7qtvp\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.671077 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.678290 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config" (OuterVolumeSpecName: "config") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.681624 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d6ac59c-3290-44c5-a7fd-7f878afe393b" (UID: "4d6ac59c-3290-44c5-a7fd-7f878afe393b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.772445 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.772484 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d6ac59c-3290-44c5-a7fd-7f878afe393b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.915701 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.928866 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978081 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle\") pod \"342971ff-33d6-459e-9767-7950c42be1fd\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978184 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys\") pod \"84e0d72e-954f-4460-95ff-46e5f602c3c5\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p25xq\" (UniqueName: \"kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq\") pod \"342971ff-33d6-459e-9767-7950c42be1fd\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978273 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle\") pod \"84e0d72e-954f-4460-95ff-46e5f602c3c5\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978451 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data\") pod \"84e0d72e-954f-4460-95ff-46e5f602c3c5\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978491 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data\") pod \"342971ff-33d6-459e-9767-7950c42be1fd\" (UID: \"342971ff-33d6-459e-9767-7950c42be1fd\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.978515 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t47k\" (UniqueName: \"kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k\") pod \"84e0d72e-954f-4460-95ff-46e5f602c3c5\" (UID: \"84e0d72e-954f-4460-95ff-46e5f602c3c5\") " Jan 29 16:01:05 crc kubenswrapper[4835]: I0129 16:01:05.994693 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq" (OuterVolumeSpecName: "kube-api-access-p25xq") pod "342971ff-33d6-459e-9767-7950c42be1fd" (UID: "342971ff-33d6-459e-9767-7950c42be1fd"). InnerVolumeSpecName "kube-api-access-p25xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.017400 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "84e0d72e-954f-4460-95ff-46e5f602c3c5" (UID: "84e0d72e-954f-4460-95ff-46e5f602c3c5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.033757 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k" (OuterVolumeSpecName: "kube-api-access-2t47k") pod "84e0d72e-954f-4460-95ff-46e5f602c3c5" (UID: "84e0d72e-954f-4460-95ff-46e5f602c3c5"). InnerVolumeSpecName "kube-api-access-2t47k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.080765 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.080806 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p25xq\" (UniqueName: \"kubernetes.io/projected/342971ff-33d6-459e-9767-7950c42be1fd-kube-api-access-p25xq\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.080820 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t47k\" (UniqueName: \"kubernetes.io/projected/84e0d72e-954f-4460-95ff-46e5f602c3c5-kube-api-access-2t47k\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.085502 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "342971ff-33d6-459e-9767-7950c42be1fd" (UID: "342971ff-33d6-459e-9767-7950c42be1fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.097709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data" (OuterVolumeSpecName: "config-data") pod "342971ff-33d6-459e-9767-7950c42be1fd" (UID: "342971ff-33d6-459e-9767-7950c42be1fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.108642 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data" (OuterVolumeSpecName: "config-data") pod "84e0d72e-954f-4460-95ff-46e5f602c3c5" (UID: "84e0d72e-954f-4460-95ff-46e5f602c3c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.160858 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.160856 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84e0d72e-954f-4460-95ff-46e5f602c3c5" (UID: "84e0d72e-954f-4460-95ff-46e5f602c3c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.182250 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data\") pod \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.184825 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85rx2\" (UniqueName: \"kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2\") pod \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.185149 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle\") pod \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.185370 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs\") pod \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\" (UID: \"ede2cd52-c131-4cfa-8bd2-415d45e4276b\") " Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.186175 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.186261 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.186331 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e0d72e-954f-4460-95ff-46e5f602c3c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.186390 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/342971ff-33d6-459e-9767-7950c42be1fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.186669 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs" (OuterVolumeSpecName: "logs") pod "ede2cd52-c131-4cfa-8bd2-415d45e4276b" (UID: "ede2cd52-c131-4cfa-8bd2-415d45e4276b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.190882 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2" (OuterVolumeSpecName: "kube-api-access-85rx2") pod "ede2cd52-c131-4cfa-8bd2-415d45e4276b" (UID: "ede2cd52-c131-4cfa-8bd2-415d45e4276b"). InnerVolumeSpecName "kube-api-access-85rx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.222273 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ede2cd52-c131-4cfa-8bd2-415d45e4276b" (UID: "ede2cd52-c131-4cfa-8bd2-415d45e4276b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.239990 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data" (OuterVolumeSpecName: "config-data") pod "ede2cd52-c131-4cfa-8bd2-415d45e4276b" (UID: "ede2cd52-c131-4cfa-8bd2-415d45e4276b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.288135 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ede2cd52-c131-4cfa-8bd2-415d45e4276b-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.288168 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.288179 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85rx2\" (UniqueName: \"kubernetes.io/projected/ede2cd52-c131-4cfa-8bd2-415d45e4276b-kube-api-access-85rx2\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.288190 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede2cd52-c131-4cfa-8bd2-415d45e4276b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.420201 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"342971ff-33d6-459e-9767-7950c42be1fd","Type":"ContainerDied","Data":"9baf00ae3740a939f4eb9af4e8f9c1545605c5db21e43fff9815d45907f6018b"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.420252 4835 scope.go:117] "RemoveContainer" 
containerID="e2837efcb7bcd26cd17137a9922f10ab77200f81965a56c8d85fd1cf6d013fb6" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.420365 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.423348 4835 generic.go:334] "Generic (PLEG): container finished" podID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerID="8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211" exitCode=0 Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.423422 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerDied","Data":"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.423581 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ede2cd52-c131-4cfa-8bd2-415d45e4276b","Type":"ContainerDied","Data":"a821d71c095c631704b04c1e8047d7e8d18b363afc838fb5e91d675bf760ad80"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.423419 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.425447 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29495041-wrglx" event={"ID":"84e0d72e-954f-4460-95ff-46e5f602c3c5","Type":"ContainerDied","Data":"2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.425615 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdcd7aa56b1bee76d97829b9e807b267c5d090e7bd99999b372ee9d05b13d22" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.425712 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29495041-wrglx" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.443823 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" event={"ID":"4d6ac59c-3290-44c5-a7fd-7f878afe393b","Type":"ContainerDied","Data":"4951f518eb6c138c3cdd63c743f9ecbc0b4313e322eb16a5b5c75e76a78c01d2"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.443910 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b4f5fc4f-p8jsz" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.455359 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerStarted","Data":"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.455408 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerStarted","Data":"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.455422 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerStarted","Data":"91fdc65319d398b52643785d1b929bf4d073f88e4bb95af752cb93315dc01e72"} Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.471940 4835 scope.go:117] "RemoveContainer" containerID="8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.519890 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.533926 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: 
I0129 16:01:06.535336 4835 scope.go:117] "RemoveContainer" containerID="352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.577288 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587003 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587586 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-api" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587605 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-api" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587628 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerName="init" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587639 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerName="init" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587661 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="342971ff-33d6-459e-9767-7950c42be1fd" containerName="nova-scheduler-scheduler" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587670 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="342971ff-33d6-459e-9767-7950c42be1fd" containerName="nova-scheduler-scheduler" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587689 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerName="dnsmasq-dns" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587697 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" 
containerName="dnsmasq-dns" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587710 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-log" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587718 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-log" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.587731 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e0d72e-954f-4460-95ff-46e5f602c3c5" containerName="keystone-cron" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587738 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e0d72e-954f-4460-95ff-46e5f602c3c5" containerName="keystone-cron" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.587986 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-log" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.588002 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" containerName="dnsmasq-dns" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.588018 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e0d72e-954f-4460-95ff-46e5f602c3c5" containerName="keystone-cron" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.588040 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="342971ff-33d6-459e-9767-7950c42be1fd" containerName="nova-scheduler-scheduler" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.588050 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" containerName="nova-api-api" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.588884 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.591241 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.613720 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.618063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.618127 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7gm\" (UniqueName: \"kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.618336 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.629844 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.644108 4835 scope.go:117] "RemoveContainer" containerID="8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.644742 4835 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211\": container with ID starting with 8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211 not found: ID does not exist" containerID="8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.644787 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211"} err="failed to get container status \"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211\": rpc error: code = NotFound desc = could not find container \"8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211\": container with ID starting with 8e8a90dcb8885fc721ebc755feda1dd29a672172ee2ed3fbf3e1ca46a51c7211 not found: ID does not exist" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.644813 4835 scope.go:117] "RemoveContainer" containerID="352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e" Jan 29 16:01:06 crc kubenswrapper[4835]: E0129 16:01:06.648816 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e\": container with ID starting with 352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e not found: ID does not exist" containerID="352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.648878 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e"} err="failed to get container status \"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e\": rpc error: code = NotFound desc = could not find container 
\"352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e\": container with ID starting with 352b25115eabd26a71636c12f601d1124ff550fb85e224eacf9840b896d0895e not found: ID does not exist" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.648912 4835 scope.go:117] "RemoveContainer" containerID="a66043698a03e5f4a0e41f575e68579bd9553912fae8d7306dee73d253a53b47" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.649802 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.665691 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.668019 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.674443 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.675099 4835 scope.go:117] "RemoveContainer" containerID="16c6865f3a5dfa7e557676d74d372f2b574d3c8667259bc047a23c592eb39f97" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.685864 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b4f5fc4f-p8jsz"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.698904 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.707852 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.707833913 podStartE2EDuration="2.707833913s" podCreationTimestamp="2026-01-29 16:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:06.560989445 +0000 UTC m=+1968.754033109" 
watchObservedRunningTime="2026-01-29 16:01:06.707833913 +0000 UTC m=+1968.900877557" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgmv\" (UniqueName: \"kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720450 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720544 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720614 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720642 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720685 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp7gm\" (UniqueName: \"kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.720715 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.727097 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.733296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.738032 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp7gm\" (UniqueName: \"kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm\") pod \"nova-scheduler-0\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.822931 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgmv\" (UniqueName: 
\"kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.822991 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.823053 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.823092 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.823792 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.827547 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.829265 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.839420 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgmv\" (UniqueName: \"kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv\") pod \"nova-api-0\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") " pod="openstack/nova-api-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.944948 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:06 crc kubenswrapper[4835]: I0129 16:01:06.990877 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:07 crc kubenswrapper[4835]: W0129 16:01:07.400574 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1fa4a24_c047_4ce8_9120_5d23671f7937.slice/crio-58f75e4085bac0afe730d9ccdc39850b42a1d8f769ce946a3b5e7f4b42c9e904 WatchSource:0}: Error finding container 58f75e4085bac0afe730d9ccdc39850b42a1d8f769ce946a3b5e7f4b42c9e904: Status 404 returned error can't find the container with id 58f75e4085bac0afe730d9ccdc39850b42a1d8f769ce946a3b5e7f4b42c9e904 Jan 29 16:01:07 crc kubenswrapper[4835]: I0129 16:01:07.401485 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:07 crc kubenswrapper[4835]: I0129 16:01:07.476122 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1fa4a24-c047-4ce8-9120-5d23671f7937","Type":"ContainerStarted","Data":"58f75e4085bac0afe730d9ccdc39850b42a1d8f769ce946a3b5e7f4b42c9e904"} Jan 29 16:01:07 crc kubenswrapper[4835]: W0129 16:01:07.494559 
4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb38b2b8d_04e6_4471_9110_4334f8112a6b.slice/crio-d73897f3fb1729dff5a469f26577f288b3681b327a68283dff18a164a01771e2 WatchSource:0}: Error finding container d73897f3fb1729dff5a469f26577f288b3681b327a68283dff18a164a01771e2: Status 404 returned error can't find the container with id d73897f3fb1729dff5a469f26577f288b3681b327a68283dff18a164a01771e2 Jan 29 16:01:07 crc kubenswrapper[4835]: I0129 16:01:07.498256 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.488446 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerStarted","Data":"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"} Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.489118 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerStarted","Data":"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"} Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.489141 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerStarted","Data":"d73897f3fb1729dff5a469f26577f288b3681b327a68283dff18a164a01771e2"} Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.505699 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="342971ff-33d6-459e-9767-7950c42be1fd" path="/var/lib/kubelet/pods/342971ff-33d6-459e-9767-7950c42be1fd/volumes" Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.507209 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d6ac59c-3290-44c5-a7fd-7f878afe393b" 
path="/var/lib/kubelet/pods/4d6ac59c-3290-44c5-a7fd-7f878afe393b/volumes" Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.508170 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede2cd52-c131-4cfa-8bd2-415d45e4276b" path="/var/lib/kubelet/pods/ede2cd52-c131-4cfa-8bd2-415d45e4276b/volumes" Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.508787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1fa4a24-c047-4ce8-9120-5d23671f7937","Type":"ContainerStarted","Data":"f44c09514405a8d86cee1df1cacbbe2f6446d3cd016435d7056fa9ce060a5c34"} Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.510382 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5103553979999997 podStartE2EDuration="2.510355398s" podCreationTimestamp="2026-01-29 16:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:08.509091457 +0000 UTC m=+1970.702135111" watchObservedRunningTime="2026-01-29 16:01:08.510355398 +0000 UTC m=+1970.703399082" Jan 29 16:01:08 crc kubenswrapper[4835]: I0129 16:01:08.540596 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.540575972 podStartE2EDuration="2.540575972s" podCreationTimestamp="2026-01-29 16:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:08.533082332 +0000 UTC m=+1970.726125986" watchObservedRunningTime="2026-01-29 16:01:08.540575972 +0000 UTC m=+1970.733619636" Jan 29 16:01:09 crc kubenswrapper[4835]: I0129 16:01:09.912054 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:01:09 crc kubenswrapper[4835]: I0129 16:01:09.912389 4835 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:01:10 crc kubenswrapper[4835]: I0129 16:01:10.519406 4835 generic.go:334] "Generic (PLEG): container finished" podID="b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" containerID="1fcaed86489c7aecce529375985bed09cd24f7a06d51eb4a85cd96ac9381dfeb" exitCode=0 Jan 29 16:01:10 crc kubenswrapper[4835]: I0129 16:01:10.519449 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" event={"ID":"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89","Type":"ContainerDied","Data":"1fcaed86489c7aecce529375985bed09cd24f7a06d51eb4a85cd96ac9381dfeb"} Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.840244 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.919549 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjstr\" (UniqueName: \"kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr\") pod \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.919714 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data\") pod \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.919760 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle\") pod \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.919885 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts\") pod \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\" (UID: \"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89\") " Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.924739 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts" (OuterVolumeSpecName: "scripts") pod "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" (UID: "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.925140 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr" (OuterVolumeSpecName: "kube-api-access-kjstr") pod "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" (UID: "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89"). InnerVolumeSpecName "kube-api-access-kjstr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.945564 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.945847 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data" (OuterVolumeSpecName: "config-data") pod "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" (UID: "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:11.947714 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" (UID: "b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.022247 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.022541 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.022561 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.022569 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjstr\" (UniqueName: \"kubernetes.io/projected/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89-kube-api-access-kjstr\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.537959 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" event={"ID":"b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89","Type":"ContainerDied","Data":"b229f53d3499664a57d14afcde93f6a20bef648188b3b747ec10c9a6309abb93"} Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.538002 4835 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="b229f53d3499664a57d14afcde93f6a20bef648188b3b747ec10c9a6309abb93" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.538059 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhhrj" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.617246 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:01:13 crc kubenswrapper[4835]: E0129 16:01:12.617739 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" containerName="nova-cell1-conductor-db-sync" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.617753 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" containerName="nova-cell1-conductor-db-sync" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.617973 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" containerName="nova-cell1-conductor-db-sync" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.618679 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.622375 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.629284 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.737406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.737502 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.737868 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7gw\" (UniqueName: \"kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.839551 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7gw\" (UniqueName: \"kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 
16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.839644 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.839690 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.844374 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.844424 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.859688 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7gw\" (UniqueName: \"kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw\") pod \"nova-cell1-conductor-0\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") " pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:12.945669 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:13 crc kubenswrapper[4835]: I0129 16:01:13.587610 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.556221 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36560cf0-b011-4fde-8cc0-00d63f2a141a","Type":"ContainerStarted","Data":"3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8"} Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.556876 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.557320 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36560cf0-b011-4fde-8cc0-00d63f2a141a","Type":"ContainerStarted","Data":"a1e4f2e0924bcc0323e62eb442dd1a9e356161418aa5ab15239b3710d1c32d1f"} Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.571650 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.571627334 podStartE2EDuration="2.571627334s" podCreationTimestamp="2026-01-29 16:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:14.570076505 +0000 UTC m=+1976.763120169" watchObservedRunningTime="2026-01-29 16:01:14.571627334 +0000 UTC m=+1976.764670988" Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.911441 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:01:14 crc kubenswrapper[4835]: I0129 16:01:14.911503 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:01:15 crc kubenswrapper[4835]: I0129 16:01:15.923648 4835 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:15 crc kubenswrapper[4835]: I0129 16:01:15.923900 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:16 crc kubenswrapper[4835]: I0129 16:01:16.945682 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:01:16 crc kubenswrapper[4835]: I0129 16:01:16.975770 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:01:16 crc kubenswrapper[4835]: I0129 16:01:16.993315 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:01:16 crc kubenswrapper[4835]: I0129 16:01:16.993425 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:01:17 crc kubenswrapper[4835]: I0129 16:01:17.640060 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:01:18 crc kubenswrapper[4835]: I0129 16:01:18.074793 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:18 crc kubenswrapper[4835]: I0129 16:01:18.074835 4835 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.203:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:22 crc kubenswrapper[4835]: I0129 16:01:22.981260 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 16:01:23 crc kubenswrapper[4835]: I0129 16:01:23.129135 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:01:24 crc kubenswrapper[4835]: I0129 16:01:24.917912 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:01:24 crc kubenswrapper[4835]: I0129 16:01:24.918363 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:01:24 crc kubenswrapper[4835]: I0129 16:01:24.922189 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:01:25 crc kubenswrapper[4835]: I0129 16:01:25.704976 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:01:26 crc kubenswrapper[4835]: I0129 16:01:26.667284 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:01:26 crc kubenswrapper[4835]: I0129 16:01:26.667798 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="28230ab4-6e4e-497e-9c31-22ff91eba741" containerName="kube-state-metrics" containerID="cri-o://8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff" gracePeriod=30 Jan 29 16:01:26 crc kubenswrapper[4835]: I0129 16:01:26.994942 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:01:26 crc kubenswrapper[4835]: I0129 
16:01:26.996289 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:01:26 crc kubenswrapper[4835]: I0129 16:01:26.996529 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.003892 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.140044 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.212496 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzxg8\" (UniqueName: \"kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8\") pod \"28230ab4-6e4e-497e-9c31-22ff91eba741\" (UID: \"28230ab4-6e4e-497e-9c31-22ff91eba741\") " Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.219443 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8" (OuterVolumeSpecName: "kube-api-access-mzxg8") pod "28230ab4-6e4e-497e-9c31-22ff91eba741" (UID: "28230ab4-6e4e-497e-9c31-22ff91eba741"). InnerVolumeSpecName "kube-api-access-mzxg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.319407 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzxg8\" (UniqueName: \"kubernetes.io/projected/28230ab4-6e4e-497e-9c31-22ff91eba741-kube-api-access-mzxg8\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:27 crc kubenswrapper[4835]: E0129 16:01:27.462282 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c31e6b_d1e2_4178_a905_529bf3b59f4b.slice/crio-conmon-2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81c31e6b_d1e2_4178_a905_529bf3b59f4b.slice/crio-2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181.scope\": RecentStats: unable to find data in memory cache]" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.521059 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.624887 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzxdf\" (UniqueName: \"kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf\") pod \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.625109 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data\") pod \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.625285 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle\") pod \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\" (UID: \"81c31e6b-d1e2-4178-a905-529bf3b59f4b\") " Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.630331 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf" (OuterVolumeSpecName: "kube-api-access-dzxdf") pod "81c31e6b-d1e2-4178-a905-529bf3b59f4b" (UID: "81c31e6b-d1e2-4178-a905-529bf3b59f4b"). InnerVolumeSpecName "kube-api-access-dzxdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.653692 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81c31e6b-d1e2-4178-a905-529bf3b59f4b" (UID: "81c31e6b-d1e2-4178-a905-529bf3b59f4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.654819 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data" (OuterVolumeSpecName: "config-data") pod "81c31e6b-d1e2-4178-a905-529bf3b59f4b" (UID: "81c31e6b-d1e2-4178-a905-529bf3b59f4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.707726 4835 generic.go:334] "Generic (PLEG): container finished" podID="28230ab4-6e4e-497e-9c31-22ff91eba741" containerID="8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff" exitCode=2 Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.707813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"28230ab4-6e4e-497e-9c31-22ff91eba741","Type":"ContainerDied","Data":"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff"} Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.707846 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"28230ab4-6e4e-497e-9c31-22ff91eba741","Type":"ContainerDied","Data":"b530027dc5e939b6100e0ce06ad986e1b85c9fd3c7c757ef6f6df904cff24e57"} Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.707845 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.707865 4835 scope.go:117] "RemoveContainer" containerID="8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.709367 4835 generic.go:334] "Generic (PLEG): container finished" podID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" containerID="2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181" exitCode=137 Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.709980 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81c31e6b-d1e2-4178-a905-529bf3b59f4b","Type":"ContainerDied","Data":"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181"} Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.710095 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"81c31e6b-d1e2-4178-a905-529bf3b59f4b","Type":"ContainerDied","Data":"53d76f47104ea2d5f5562cc48cb31a91bc3b33941bbae2ae7da4a78016f840c7"} Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.710007 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.710387 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.716026 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.727535 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzxdf\" (UniqueName: \"kubernetes.io/projected/81c31e6b-d1e2-4178-a905-529bf3b59f4b-kube-api-access-dzxdf\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.727569 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.727582 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81c31e6b-d1e2-4178-a905-529bf3b59f4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.735028 4835 scope.go:117] "RemoveContainer" containerID="8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff" Jan 29 16:01:27 crc kubenswrapper[4835]: E0129 16:01:27.735519 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff\": container with ID starting with 8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff not found: ID does not exist" containerID="8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.735556 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff"} err="failed to get container status \"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff\": rpc error: code = NotFound desc = could not find container \"8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff\": container with ID starting with 8a24bc33aa78b5b3b199f6ea3f5b6f9167d7031035cdb819e06f171e780439ff not found: ID does not exist" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.735581 4835 scope.go:117] "RemoveContainer" containerID="2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.765418 4835 scope.go:117] "RemoveContainer" containerID="2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181" Jan 29 16:01:27 crc kubenswrapper[4835]: E0129 16:01:27.768901 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181\": container with ID starting with 2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181 not found: ID does not exist" containerID="2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.769060 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181"} err="failed to get container status \"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181\": rpc error: code = NotFound desc = could not find container \"2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181\": container with ID starting with 2311846fbe08b661bed12d4e3c4cd942b9dc89884dca1dabce5cf64d18f33181 not found: ID does not exist" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.800413 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/kube-state-metrics-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.813572 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.826252 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: E0129 16:01:27.826820 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28230ab4-6e4e-497e-9c31-22ff91eba741" containerName="kube-state-metrics" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.826886 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="28230ab4-6e4e-497e-9c31-22ff91eba741" containerName="kube-state-metrics" Jan 29 16:01:27 crc kubenswrapper[4835]: E0129 16:01:27.826942 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.826987 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.827223 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.827300 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="28230ab4-6e4e-497e-9c31-22ff91eba741" containerName="kube-state-metrics" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.827958 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.835787 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.837657 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.840896 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.852539 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.865544 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.878188 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.910915 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.914831 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.915071 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.932063 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.933838 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.934012 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.934161 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcwv4\" (UniqueName: \"kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4\") pod \"kube-state-metrics-0\" (UID: 
\"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:27 crc kubenswrapper[4835]: I0129 16:01:27.932358 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.005768 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.037574 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.037666 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.037686 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.037732 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.037750 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.044760 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.044860 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk68z\" (UniqueName: \"kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.044935 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.045019 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcwv4\" (UniqueName: \"kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4\") pod \"kube-state-metrics-0\" (UID: 
\"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.046539 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.061121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.069545 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"] Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.071737 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.079139 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"] Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.081887 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.087179 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcwv4\" (UniqueName: \"kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4\") pod \"kube-state-metrics-0\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.148790 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.148891 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.148919 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.148986 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk68z\" (UniqueName: \"kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.149020 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.164178 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.173262 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.180066 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk68z\" (UniqueName: \"kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.184946 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.191156 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.193955 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 
16:01:28.256251 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.256694 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.258569 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.259201 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.259354 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc 
kubenswrapper[4835]: I0129 16:01:28.259485 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98d9h\" (UniqueName: \"kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.271356 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.363720 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364046 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364151 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364181 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb\") 
pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364199 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364326 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98d9h\" (UniqueName: \"kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.364858 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.365041 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.365151 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: 
\"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.365173 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.365936 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.395670 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98d9h\" (UniqueName: \"kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h\") pod \"dnsmasq-dns-867cd545c7-74wd6\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.507172 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28230ab4-6e4e-497e-9c31-22ff91eba741" path="/var/lib/kubelet/pods/28230ab4-6e4e-497e-9c31-22ff91eba741/volumes" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.508073 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81c31e6b-d1e2-4178-a905-529bf3b59f4b" path="/var/lib/kubelet/pods/81c31e6b-d1e2-4178-a905-529bf3b59f4b/volumes" Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.583979 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867cd545c7-74wd6"
Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.743224 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 29 16:01:28 crc kubenswrapper[4835]: W0129 16:01:28.751769 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b578fda_bb52_4dce_8d76_5e8710aa08a8.slice/crio-55bef02e0bdb3da607f6297ccbc16cdfdfdd946590e9eb66d4a3d51faf49d271 WatchSource:0}: Error finding container 55bef02e0bdb3da607f6297ccbc16cdfdfdd946590e9eb66d4a3d51faf49d271: Status 404 returned error can't find the container with id 55bef02e0bdb3da607f6297ccbc16cdfdfdd946590e9eb66d4a3d51faf49d271
Jan 29 16:01:28 crc kubenswrapper[4835]: I0129 16:01:28.839069 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 16:01:28 crc kubenswrapper[4835]: W0129 16:01:28.847236 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf082fcc3_a213_48ea_a0f0_359e93d9ce1c.slice/crio-15a08c9386b462d445956ed113b63396be4bd10809d2f7324701f1e19fc548f5 WatchSource:0}: Error finding container 15a08c9386b462d445956ed113b63396be4bd10809d2f7324701f1e19fc548f5: Status 404 returned error can't find the container with id 15a08c9386b462d445956ed113b63396be4bd10809d2f7324701f1e19fc548f5
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.095081 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"]
Jan 29 16:01:29 crc kubenswrapper[4835]: W0129 16:01:29.097672 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b92181d_1298_4d3c_93d8_92ee9abb2e1b.slice/crio-7d1d2c1390098635fcb33a3026efdfe2342c5ef29bac89744b4036718965f252 WatchSource:0}: Error finding container 7d1d2c1390098635fcb33a3026efdfe2342c5ef29bac89744b4036718965f252: Status 404 returned error can't find the container with id 7d1d2c1390098635fcb33a3026efdfe2342c5ef29bac89744b4036718965f252
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.647637 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.648310 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="sg-core" containerID="cri-o://815a87a6496ef6eef743cf2be54f4fb51b0176be5179f4ab3e8d95b561a5a999" gracePeriod=30
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.648385 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-notification-agent" containerID="cri-o://9a55065beb00a16ee66830598aa3988f30bca1e1c7404cd78f965176724c270d" gracePeriod=30
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.648412 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="proxy-httpd" containerID="cri-o://efef5c2434c64fc498f94187b303b93ccc7229269b7c6f889de76a7809116c0b" gracePeriod=30
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.652449 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-central-agent" containerID="cri-o://df1ae7ef1baa5c24dbe5f47dc713db2de333f1ad6a4144d55433a36d695a3716" gracePeriod=30
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.738032 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f082fcc3-a213-48ea-a0f0-359e93d9ce1c","Type":"ContainerStarted","Data":"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f"}
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.738074 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f082fcc3-a213-48ea-a0f0-359e93d9ce1c","Type":"ContainerStarted","Data":"15a08c9386b462d445956ed113b63396be4bd10809d2f7324701f1e19fc548f5"}
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.739279 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b578fda-bb52-4dce-8d76-5e8710aa08a8","Type":"ContainerStarted","Data":"55bef02e0bdb3da607f6297ccbc16cdfdfdd946590e9eb66d4a3d51faf49d271"}
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.740968 4835 generic.go:334] "Generic (PLEG): container finished" podID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerID="62883f1aedbcfa6977700b6a988ff4f2b15c345d045d8729a65254788e47da6d" exitCode=0
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.741048 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" event={"ID":"7b92181d-1298-4d3c-93d8-92ee9abb2e1b","Type":"ContainerDied","Data":"62883f1aedbcfa6977700b6a988ff4f2b15c345d045d8729a65254788e47da6d"}
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.741096 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" event={"ID":"7b92181d-1298-4d3c-93d8-92ee9abb2e1b","Type":"ContainerStarted","Data":"7d1d2c1390098635fcb33a3026efdfe2342c5ef29bac89744b4036718965f252"}
Jan 29 16:01:29 crc kubenswrapper[4835]: I0129 16:01:29.791202 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.791181537 podStartE2EDuration="2.791181537s" podCreationTimestamp="2026-01-29 16:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:29.762714598 +0000 UTC m=+1991.955758252" watchObservedRunningTime="2026-01-29 16:01:29.791181537 +0000 UTC m=+1991.984225191"
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.721034 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768225 4835 generic.go:334] "Generic (PLEG): container finished" podID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerID="efef5c2434c64fc498f94187b303b93ccc7229269b7c6f889de76a7809116c0b" exitCode=0
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768273 4835 generic.go:334] "Generic (PLEG): container finished" podID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerID="815a87a6496ef6eef743cf2be54f4fb51b0176be5179f4ab3e8d95b561a5a999" exitCode=2
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768407 4835 generic.go:334] "Generic (PLEG): container finished" podID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerID="df1ae7ef1baa5c24dbe5f47dc713db2de333f1ad6a4144d55433a36d695a3716" exitCode=0
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768853 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerDied","Data":"efef5c2434c64fc498f94187b303b93ccc7229269b7c6f889de76a7809116c0b"}
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768897 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerDied","Data":"815a87a6496ef6eef743cf2be54f4fb51b0176be5179f4ab3e8d95b561a5a999"}
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.768944 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerDied","Data":"df1ae7ef1baa5c24dbe5f47dc713db2de333f1ad6a4144d55433a36d695a3716"}
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.774827 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b578fda-bb52-4dce-8d76-5e8710aa08a8","Type":"ContainerStarted","Data":"0b587848172b5d3c1b88a5ab6e25578e5214132176aa1d24b7c6b21960df93b0"}
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.775157 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.793329 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-log" containerID="cri-o://91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690" gracePeriod=30
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.793628 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" event={"ID":"7b92181d-1298-4d3c-93d8-92ee9abb2e1b","Type":"ContainerStarted","Data":"a4bebb7a38805e14b3909ac1eed6f7c8efdae09d6f1fad23656e12533bd81e37"}
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.795209 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-867cd545c7-74wd6"
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.795279 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-api" containerID="cri-o://d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e" gracePeriod=30
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.810782 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.55612485 podStartE2EDuration="3.810749588s" podCreationTimestamp="2026-01-29 16:01:27 +0000 UTC" firstStartedPulling="2026-01-29 16:01:28.753681193 +0000 UTC m=+1990.946724847" lastFinishedPulling="2026-01-29 16:01:30.008305931 +0000 UTC m=+1992.201349585" observedRunningTime="2026-01-29 16:01:30.807367793 +0000 UTC m=+1993.000411447" watchObservedRunningTime="2026-01-29 16:01:30.810749588 +0000 UTC m=+1993.003793242"
Jan 29 16:01:30 crc kubenswrapper[4835]: I0129 16:01:30.836826 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" podStartSLOduration=3.836808756 podStartE2EDuration="3.836808756s" podCreationTimestamp="2026-01-29 16:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:30.833945984 +0000 UTC m=+1993.026989638" watchObservedRunningTime="2026-01-29 16:01:30.836808756 +0000 UTC m=+1993.029852410"
Jan 29 16:01:31 crc kubenswrapper[4835]: I0129 16:01:31.803736 4835 generic.go:334] "Generic (PLEG): container finished" podID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerID="91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690" exitCode=143
Jan 29 16:01:31 crc kubenswrapper[4835]: I0129 16:01:31.803813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerDied","Data":"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"}
Jan 29 16:01:33 crc kubenswrapper[4835]: I0129 16:01:33.273130 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.446536 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.591386 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data\") pod \"b38b2b8d-04e6-4471-9110-4334f8112a6b\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") "
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.591438 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs\") pod \"b38b2b8d-04e6-4471-9110-4334f8112a6b\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") "
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.591705 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle\") pod \"b38b2b8d-04e6-4471-9110-4334f8112a6b\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") "
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.591781 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbgmv\" (UniqueName: \"kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv\") pod \"b38b2b8d-04e6-4471-9110-4334f8112a6b\" (UID: \"b38b2b8d-04e6-4471-9110-4334f8112a6b\") "
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.592082 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs" (OuterVolumeSpecName: "logs") pod "b38b2b8d-04e6-4471-9110-4334f8112a6b" (UID: "b38b2b8d-04e6-4471-9110-4334f8112a6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.592644 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b38b2b8d-04e6-4471-9110-4334f8112a6b-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.602438 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv" (OuterVolumeSpecName: "kube-api-access-rbgmv") pod "b38b2b8d-04e6-4471-9110-4334f8112a6b" (UID: "b38b2b8d-04e6-4471-9110-4334f8112a6b"). InnerVolumeSpecName "kube-api-access-rbgmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.625236 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data" (OuterVolumeSpecName: "config-data") pod "b38b2b8d-04e6-4471-9110-4334f8112a6b" (UID: "b38b2b8d-04e6-4471-9110-4334f8112a6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.637960 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b38b2b8d-04e6-4471-9110-4334f8112a6b" (UID: "b38b2b8d-04e6-4471-9110-4334f8112a6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.693950 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.693990 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbgmv\" (UniqueName: \"kubernetes.io/projected/b38b2b8d-04e6-4471-9110-4334f8112a6b-kube-api-access-rbgmv\") on node \"crc\" DevicePath \"\""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.694005 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b38b2b8d-04e6-4471-9110-4334f8112a6b-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.843357 4835 generic.go:334] "Generic (PLEG): container finished" podID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerID="d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e" exitCode=0
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.843403 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerDied","Data":"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"}
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.843432 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b38b2b8d-04e6-4471-9110-4334f8112a6b","Type":"ContainerDied","Data":"d73897f3fb1729dff5a469f26577f288b3681b327a68283dff18a164a01771e2"}
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.843452 4835 scope.go:117] "RemoveContainer" containerID="d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.843733 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.873104 4835 scope.go:117] "RemoveContainer" containerID="91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.877800 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.891480 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.898981 4835 scope.go:117] "RemoveContainer" containerID="d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"
Jan 29 16:01:34 crc kubenswrapper[4835]: E0129 16:01:34.899450 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e\": container with ID starting with d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e not found: ID does not exist" containerID="d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.899509 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e"} err="failed to get container status \"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e\": rpc error: code = NotFound desc = could not find container \"d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e\": container with ID starting with d82b11245ee30cc2b817d17c543ced3422cb81583ca306dd356bab31bf3a413e not found: ID does not exist"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.899540 4835 scope.go:117] "RemoveContainer" containerID="91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"
Jan 29 16:01:34 crc kubenswrapper[4835]: E0129 16:01:34.899928 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690\": container with ID starting with 91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690 not found: ID does not exist" containerID="91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.899992 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690"} err="failed to get container status \"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690\": rpc error: code = NotFound desc = could not find container \"91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690\": container with ID starting with 91f6a3a44bf56fbd50812845c549bc10768ff0fb0b68c6dacc83615cf7454690 not found: ID does not exist"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.902216 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:34 crc kubenswrapper[4835]: E0129 16:01:34.902969 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-api"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.902997 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-api"
Jan 29 16:01:34 crc kubenswrapper[4835]: E0129 16:01:34.903018 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-log"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.903025 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-log"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.908094 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-log"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.908172 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" containerName="nova-api-api"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.912211 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.914835 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.915039 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.915125 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 29 16:01:34 crc kubenswrapper[4835]: I0129 16:01:34.923634 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102092 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102197 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102277 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102331 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102370 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlm9\" (UniqueName: \"kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.102674 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204612 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204681 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204723 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204740 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204760 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djlm9\" (UniqueName: \"kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.204814 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.205253 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.210921 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.210997 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.211064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.216762 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.227031 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djlm9\" (UniqueName: \"kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9\") pod \"nova-api-0\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.278992 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.738934 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 16:01:35 crc kubenswrapper[4835]: W0129 16:01:35.743624 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14bd6525_538d_46df_9a67_8bc3e317f8d3.slice/crio-9838cc28015d1d8ee6f425f2d585737172929bef6e407ffd377ee616e135ab3c WatchSource:0}: Error finding container 9838cc28015d1d8ee6f425f2d585737172929bef6e407ffd377ee616e135ab3c: Status 404 returned error can't find the container with id 9838cc28015d1d8ee6f425f2d585737172929bef6e407ffd377ee616e135ab3c
Jan 29 16:01:35 crc kubenswrapper[4835]: I0129 16:01:35.863506 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerStarted","Data":"9838cc28015d1d8ee6f425f2d585737172929bef6e407ffd377ee616e135ab3c"}
Jan 29 16:01:36 crc kubenswrapper[4835]: I0129 16:01:36.502043 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b38b2b8d-04e6-4471-9110-4334f8112a6b" path="/var/lib/kubelet/pods/b38b2b8d-04e6-4471-9110-4334f8112a6b/volumes"
Jan 29 16:01:36 crc kubenswrapper[4835]: I0129 16:01:36.876842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerStarted","Data":"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e"}
Jan 29 16:01:36 crc kubenswrapper[4835]: I0129 16:01:36.876991 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerStarted","Data":"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13"}
Jan 29 16:01:36 crc kubenswrapper[4835]: I0129 16:01:36.906546 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.906515475 podStartE2EDuration="2.906515475s" podCreationTimestamp="2026-01-29 16:01:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:36.898162344 +0000 UTC m=+1999.091206008" watchObservedRunningTime="2026-01-29 16:01:36.906515475 +0000 UTC m=+1999.099559139"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.175068 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.273205 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.293167 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.421672 4835 scope.go:117] "RemoveContainer" containerID="7feda011de9a93ccfd38692539f0990c15e4ad836ec77c3c23cafc75462556a8"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.455964 4835 scope.go:117] "RemoveContainer" containerID="d486c746133c4c2e4445f045e011e1493dcb98380c30e9f30c9d4e8121cf946a"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.495636 4835 scope.go:117] "RemoveContainer" containerID="5740c3d748b1ad4713421af69e666a45167c69ae6198cdd359e44823932def4c"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.561867 4835 scope.go:117] "RemoveContainer" containerID="0f8ecdbdc7f45c75fa3ef0383d8fa264b1db898d64804f621587e9eb1a33135e"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.586066 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-867cd545c7-74wd6"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.626327 4835 scope.go:117] "RemoveContainer" containerID="9af6db8002bb1c810886d469801671396039c3d9f5118f544aa527d7141d7806"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.684162 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"]
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.684737 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="dnsmasq-dns" containerID="cri-o://9befa5acfbb6bcdfabe6a343d563bb32b4c20b0d8f30705f544d57834a8eea07" gracePeriod=10
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.694534 4835 scope.go:117] "RemoveContainer" containerID="6e3e934cc0998fe138c66e7a286f17dfcf35ba6c9c1ae47bd90391a8d2a3505d"
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.903913 4835 generic.go:334] "Generic (PLEG): container finished" podID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerID="9befa5acfbb6bcdfabe6a343d563bb32b4c20b0d8f30705f544d57834a8eea07" exitCode=0
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.903987 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" event={"ID":"82569a58-8a40-423d-8d86-ed3bf74d59d6","Type":"ContainerDied","Data":"9befa5acfbb6bcdfabe6a343d563bb32b4c20b0d8f30705f544d57834a8eea07"}
Jan 29 16:01:38 crc kubenswrapper[4835]: I0129 16:01:38.950190 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.274411 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.279741 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-ltvns"]
Jan 29 16:01:39 crc kubenswrapper[4835]: E0129 16:01:39.280275 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="init"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.280295 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="init"
Jan 29 16:01:39 crc kubenswrapper[4835]: E0129 16:01:39.280333 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="dnsmasq-dns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.280342 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="dnsmasq-dns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.280575 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" containerName="dnsmasq-dns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.281375 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ltvns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.284664 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.284771 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.289611 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ltvns"]
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324055 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324115 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvd7v\" (UniqueName: \"kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324182 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324235 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324345 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb\") pod \"82569a58-8a40-423d-8d86-ed3bf74d59d6\" (UID: \"82569a58-8a40-423d-8d86-ed3bf74d59d6\") "
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324496 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.324600 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgxp\" (UniqueName: \"kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.325407 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.325953 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns"
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.346317 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v" (OuterVolumeSpecName: "kube-api-access-zvd7v") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "kube-api-access-zvd7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.387631 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config" (OuterVolumeSpecName: "config") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.406067 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "dns-swift-storage-0".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.416151 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.417000 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.421726 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82569a58-8a40-423d-8d86-ed3bf74d59d6" (UID: "82569a58-8a40-423d-8d86-ed3bf74d59d6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.426853 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427003 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbgxp\" (UniqueName: \"kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427064 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427130 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427221 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427237 4835 reconciler_common.go:293] "Volume detached for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427250 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427261 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427273 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82569a58-8a40-423d-8d86-ed3bf74d59d6-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.427287 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvd7v\" (UniqueName: \"kubernetes.io/projected/82569a58-8a40-423d-8d86-ed3bf74d59d6-kube-api-access-zvd7v\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.431937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.432309 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc 
kubenswrapper[4835]: I0129 16:01:39.433407 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.455342 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbgxp\" (UniqueName: \"kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp\") pod \"nova-cell1-cell-mapping-ltvns\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.604645 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.921414 4835 generic.go:334] "Generic (PLEG): container finished" podID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerID="9a55065beb00a16ee66830598aa3988f30bca1e1c7404cd78f965176724c270d" exitCode=0 Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.921502 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerDied","Data":"9a55065beb00a16ee66830598aa3988f30bca1e1c7404cd78f965176724c270d"} Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.921978 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39fdefdc-1d93-4567-9d20-33f3c3c20051","Type":"ContainerDied","Data":"41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006"} Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.922004 4835 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="41f963ea175524a778168a5e7f8f10e8746ae526ccc8704cb39410270f3a0006" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.924700 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" event={"ID":"82569a58-8a40-423d-8d86-ed3bf74d59d6","Type":"ContainerDied","Data":"9c03770b052ec6884dc99ab74f1866c5d65116866a18915c736571466cbd4800"} Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.924785 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bfb54f9b5-h854c" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.924802 4835 scope.go:117] "RemoveContainer" containerID="9befa5acfbb6bcdfabe6a343d563bb32b4c20b0d8f30705f544d57834a8eea07" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.973921 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.991428 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"] Jan 29 16:01:39 crc kubenswrapper[4835]: I0129 16:01:39.992355 4835 scope.go:117] "RemoveContainer" containerID="94626dd62055afecbe14f7670ed31c0c367a19f4ff148cde4b91f5446a2113ef" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.003859 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bfb54f9b5-h854c"] Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.106053 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-ltvns"] Jan 29 16:01:40 crc kubenswrapper[4835]: W0129 16:01:40.115208 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45494e83_6ddf_4d30_8c01_2ffc335849d3.slice/crio-e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60 WatchSource:0}: Error finding container 
e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60: Status 404 returned error can't find the container with id e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60 Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.141595 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.141647 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.141875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xxlm\" (UniqueName: \"kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.141901 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.141927 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 
16:01:40.141951 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.142012 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle\") pod \"39fdefdc-1d93-4567-9d20-33f3c3c20051\" (UID: \"39fdefdc-1d93-4567-9d20-33f3c3c20051\") " Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.143521 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.144892 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.146851 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm" (OuterVolumeSpecName: "kube-api-access-9xxlm") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "kube-api-access-9xxlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.149495 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts" (OuterVolumeSpecName: "scripts") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.178619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.227663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245711 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245756 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245778 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xxlm\" (UniqueName: \"kubernetes.io/projected/39fdefdc-1d93-4567-9d20-33f3c3c20051-kube-api-access-9xxlm\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245787 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39fdefdc-1d93-4567-9d20-33f3c3c20051-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245795 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.245803 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.252491 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data" (OuterVolumeSpecName: "config-data") pod "39fdefdc-1d93-4567-9d20-33f3c3c20051" (UID: "39fdefdc-1d93-4567-9d20-33f3c3c20051"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.347154 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39fdefdc-1d93-4567-9d20-33f3c3c20051-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.505110 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82569a58-8a40-423d-8d86-ed3bf74d59d6" path="/var/lib/kubelet/pods/82569a58-8a40-423d-8d86-ed3bf74d59d6/volumes" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.936875 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ltvns" event={"ID":"45494e83-6ddf-4d30-8c01-2ffc335849d3","Type":"ContainerStarted","Data":"3856a12e79129eb6cfdf75bb770bfb4d042aade7bcef7272efcbfe1248749801"} Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.937648 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ltvns" event={"ID":"45494e83-6ddf-4d30-8c01-2ffc335849d3","Type":"ContainerStarted","Data":"e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60"} Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.940305 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:01:40 crc kubenswrapper[4835]: I0129 16:01:40.975344 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-ltvns" podStartSLOduration=1.9753213 podStartE2EDuration="1.9753213s" podCreationTimestamp="2026-01-29 16:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:40.955365936 +0000 UTC m=+2003.148409590" watchObservedRunningTime="2026-01-29 16:01:40.9753213 +0000 UTC m=+2003.168364944" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.001528 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.019795 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.032369 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:01:41 crc kubenswrapper[4835]: E0129 16:01:41.032902 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-notification-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.032920 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-notification-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: E0129 16:01:41.032942 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="sg-core" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.032949 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="sg-core" Jan 29 16:01:41 crc kubenswrapper[4835]: E0129 16:01:41.032958 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="proxy-httpd" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.032964 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="proxy-httpd" Jan 29 16:01:41 crc kubenswrapper[4835]: E0129 16:01:41.032973 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-central-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.032979 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-central-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.033218 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="sg-core" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.033231 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-notification-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.033237 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="proxy-httpd" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.033262 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" containerName="ceilometer-central-agent" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.034969 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.039029 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.039836 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.039366 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.039417 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.069783 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.069827 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.069869 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qf2v\" (UniqueName: \"kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.070098 4835 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.070180 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.070349 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.070406 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.070442 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171812 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts\") pod 
\"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171883 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171908 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171938 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171974 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.171994 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.172041 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5qf2v\" (UniqueName: \"kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.172127 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.172897 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.173311 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.177522 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.178012 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " 
pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.178279 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.178784 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.192727 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.194545 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qf2v\" (UniqueName: \"kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v\") pod \"ceilometer-0\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.352673 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.825596 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:01:41 crc kubenswrapper[4835]: I0129 16:01:41.949616 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerStarted","Data":"dd4b266c10ade40caf71f506ea441368ab25991c57b67db88ca35c538bfea73e"} Jan 29 16:01:42 crc kubenswrapper[4835]: I0129 16:01:42.505730 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39fdefdc-1d93-4567-9d20-33f3c3c20051" path="/var/lib/kubelet/pods/39fdefdc-1d93-4567-9d20-33f3c3c20051/volumes" Jan 29 16:01:42 crc kubenswrapper[4835]: I0129 16:01:42.961562 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerStarted","Data":"2eb7a757393f8bf70cc9374d3de67c78e1d018eb5513f80056f8cab73670e577"} Jan 29 16:01:43 crc kubenswrapper[4835]: I0129 16:01:43.972374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerStarted","Data":"51cd8eb067ac6ab7e38ed9b480cf21bc4f6e8171ac2b87b3dc825503ca0a087c"} Jan 29 16:01:44 crc kubenswrapper[4835]: I0129 16:01:44.989255 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerStarted","Data":"68f885bfbd51396b1e518a5b7261073d4617342910af6cce3149e027107f651e"} Jan 29 16:01:45 crc kubenswrapper[4835]: I0129 16:01:45.280596 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:01:45 crc kubenswrapper[4835]: I0129 16:01:45.280653 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:01:46 crc 
kubenswrapper[4835]: I0129 16:01:46.002021 4835 generic.go:334] "Generic (PLEG): container finished" podID="45494e83-6ddf-4d30-8c01-2ffc335849d3" containerID="3856a12e79129eb6cfdf75bb770bfb4d042aade7bcef7272efcbfe1248749801" exitCode=0 Jan 29 16:01:46 crc kubenswrapper[4835]: I0129 16:01:46.002062 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ltvns" event={"ID":"45494e83-6ddf-4d30-8c01-2ffc335849d3","Type":"ContainerDied","Data":"3856a12e79129eb6cfdf75bb770bfb4d042aade7bcef7272efcbfe1248749801"} Jan 29 16:01:46 crc kubenswrapper[4835]: I0129 16:01:46.292662 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:46 crc kubenswrapper[4835]: I0129 16:01:46.292668 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.409919 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.595511 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data\") pod \"45494e83-6ddf-4d30-8c01-2ffc335849d3\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.595889 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle\") pod \"45494e83-6ddf-4d30-8c01-2ffc335849d3\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.596011 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts\") pod \"45494e83-6ddf-4d30-8c01-2ffc335849d3\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.596042 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbgxp\" (UniqueName: \"kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp\") pod \"45494e83-6ddf-4d30-8c01-2ffc335849d3\" (UID: \"45494e83-6ddf-4d30-8c01-2ffc335849d3\") " Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.603767 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts" (OuterVolumeSpecName: "scripts") pod "45494e83-6ddf-4d30-8c01-2ffc335849d3" (UID: "45494e83-6ddf-4d30-8c01-2ffc335849d3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.604627 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp" (OuterVolumeSpecName: "kube-api-access-mbgxp") pod "45494e83-6ddf-4d30-8c01-2ffc335849d3" (UID: "45494e83-6ddf-4d30-8c01-2ffc335849d3"). InnerVolumeSpecName "kube-api-access-mbgxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.626359 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45494e83-6ddf-4d30-8c01-2ffc335849d3" (UID: "45494e83-6ddf-4d30-8c01-2ffc335849d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.645066 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data" (OuterVolumeSpecName: "config-data") pod "45494e83-6ddf-4d30-8c01-2ffc335849d3" (UID: "45494e83-6ddf-4d30-8c01-2ffc335849d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.698050 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.698090 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbgxp\" (UniqueName: \"kubernetes.io/projected/45494e83-6ddf-4d30-8c01-2ffc335849d3-kube-api-access-mbgxp\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.698108 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:47 crc kubenswrapper[4835]: I0129 16:01:47.698118 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45494e83-6ddf-4d30-8c01-2ffc335849d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.024529 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerStarted","Data":"ee958556fc843d154af6291223910bcec2cc7e0ab15659ebccce1239613326c9"} Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.024903 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.032928 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-ltvns" event={"ID":"45494e83-6ddf-4d30-8c01-2ffc335849d3","Type":"ContainerDied","Data":"e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60"} Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.032999 4835 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="e9d0bf59e8b2dc570d9a6848f3f7222d23ebaa31d9d13939ca46e3fde2bd7f60" Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.033086 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-ltvns" Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.073288 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.179002461 podStartE2EDuration="8.073269859s" podCreationTimestamp="2026-01-29 16:01:40 +0000 UTC" firstStartedPulling="2026-01-29 16:01:41.820605539 +0000 UTC m=+2004.013649193" lastFinishedPulling="2026-01-29 16:01:47.714872937 +0000 UTC m=+2009.907916591" observedRunningTime="2026-01-29 16:01:48.050002381 +0000 UTC m=+2010.243046035" watchObservedRunningTime="2026-01-29 16:01:48.073269859 +0000 UTC m=+2010.266313503" Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.207728 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.208056 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-log" containerID="cri-o://e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13" gracePeriod=30 Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.208108 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-api" containerID="cri-o://add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e" gracePeriod=30 Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.221200 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.221414 4835 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-scheduler-0" podUID="b1fa4a24-c047-4ce8-9120-5d23671f7937" containerName="nova-scheduler-scheduler" containerID="cri-o://f44c09514405a8d86cee1df1cacbbe2f6446d3cd016435d7056fa9ce060a5c34" gracePeriod=30 Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.254324 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.254626 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" containerID="cri-o://b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098" gracePeriod=30 Jan 29 16:01:48 crc kubenswrapper[4835]: I0129 16:01:48.255133 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" containerID="cri-o://52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b" gracePeriod=30 Jan 29 16:01:49 crc kubenswrapper[4835]: I0129 16:01:49.042509 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerDied","Data":"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098"} Jan 29 16:01:49 crc kubenswrapper[4835]: I0129 16:01:49.042573 4835 generic.go:334] "Generic (PLEG): container finished" podID="67df9f26-b1b6-450a-94c6-59928c23c485" containerID="b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098" exitCode=143 Jan 29 16:01:49 crc kubenswrapper[4835]: I0129 16:01:49.044679 4835 generic.go:334] "Generic (PLEG): container finished" podID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerID="e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13" exitCode=143 Jan 29 16:01:49 crc kubenswrapper[4835]: I0129 16:01:49.044799 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerDied","Data":"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13"} Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.067013 4835 generic.go:334] "Generic (PLEG): container finished" podID="b1fa4a24-c047-4ce8-9120-5d23671f7937" containerID="f44c09514405a8d86cee1df1cacbbe2f6446d3cd016435d7056fa9ce060a5c34" exitCode=0 Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.067069 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1fa4a24-c047-4ce8-9120-5d23671f7937","Type":"ContainerDied","Data":"f44c09514405a8d86cee1df1cacbbe2f6446d3cd016435d7056fa9ce060a5c34"} Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.399456 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:35958->10.217.0.201:8775: read: connection reset by peer" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.399493 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": read tcp 10.217.0.2:35968->10.217.0.201:8775: read: connection reset by peer" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.619021 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.767629 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.779173 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle\") pod \"b1fa4a24-c047-4ce8-9120-5d23671f7937\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.779517 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp7gm\" (UniqueName: \"kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm\") pod \"b1fa4a24-c047-4ce8-9120-5d23671f7937\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.779999 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data\") pod \"b1fa4a24-c047-4ce8-9120-5d23671f7937\" (UID: \"b1fa4a24-c047-4ce8-9120-5d23671f7937\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.787853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm" (OuterVolumeSpecName: "kube-api-access-cp7gm") pod "b1fa4a24-c047-4ce8-9120-5d23671f7937" (UID: "b1fa4a24-c047-4ce8-9120-5d23671f7937"). InnerVolumeSpecName "kube-api-access-cp7gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.812968 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data" (OuterVolumeSpecName: "config-data") pod "b1fa4a24-c047-4ce8-9120-5d23671f7937" (UID: "b1fa4a24-c047-4ce8-9120-5d23671f7937"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.818047 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1fa4a24-c047-4ce8-9120-5d23671f7937" (UID: "b1fa4a24-c047-4ce8-9120-5d23671f7937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.830264 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881338 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djlm9\" (UniqueName: \"kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9\") pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881419 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs\") pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881563 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs\") pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881626 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs\") 
pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881666 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle\") pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.881718 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data\") pod \"14bd6525-538d-46df-9a67-8bc3e317f8d3\" (UID: \"14bd6525-538d-46df-9a67-8bc3e317f8d3\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.882260 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp7gm\" (UniqueName: \"kubernetes.io/projected/b1fa4a24-c047-4ce8-9120-5d23671f7937-kube-api-access-cp7gm\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.882286 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.882301 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1fa4a24-c047-4ce8-9120-5d23671f7937-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.882355 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs" (OuterVolumeSpecName: "logs") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.889809 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9" (OuterVolumeSpecName: "kube-api-access-djlm9") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "kube-api-access-djlm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.933253 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.935956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data" (OuterVolumeSpecName: "config-data") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.944766 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.956366 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "14bd6525-538d-46df-9a67-8bc3e317f8d3" (UID: "14bd6525-538d-46df-9a67-8bc3e317f8d3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.987845 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data\") pod \"67df9f26-b1b6-450a-94c6-59928c23c485\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.988110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs\") pod \"67df9f26-b1b6-450a-94c6-59928c23c485\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.991573 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle\") pod \"67df9f26-b1b6-450a-94c6-59928c23c485\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.991804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98vng\" (UniqueName: \"kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng\") pod \"67df9f26-b1b6-450a-94c6-59928c23c485\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 
16:01:51.991942 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs\") pod \"67df9f26-b1b6-450a-94c6-59928c23c485\" (UID: \"67df9f26-b1b6-450a-94c6-59928c23c485\") " Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.992382 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs" (OuterVolumeSpecName: "logs") pod "67df9f26-b1b6-450a-94c6-59928c23c485" (UID: "67df9f26-b1b6-450a-94c6-59928c23c485"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993062 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djlm9\" (UniqueName: \"kubernetes.io/projected/14bd6525-538d-46df-9a67-8bc3e317f8d3-kube-api-access-djlm9\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993114 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993130 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993143 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14bd6525-538d-46df-9a67-8bc3e317f8d3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993155 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993166 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14bd6525-538d-46df-9a67-8bc3e317f8d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.993178 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67df9f26-b1b6-450a-94c6-59928c23c485-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:51 crc kubenswrapper[4835]: I0129 16:01:51.996417 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng" (OuterVolumeSpecName: "kube-api-access-98vng") pod "67df9f26-b1b6-450a-94c6-59928c23c485" (UID: "67df9f26-b1b6-450a-94c6-59928c23c485"). InnerVolumeSpecName "kube-api-access-98vng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.014786 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67df9f26-b1b6-450a-94c6-59928c23c485" (UID: "67df9f26-b1b6-450a-94c6-59928c23c485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.019640 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data" (OuterVolumeSpecName: "config-data") pod "67df9f26-b1b6-450a-94c6-59928c23c485" (UID: "67df9f26-b1b6-450a-94c6-59928c23c485"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.041982 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "67df9f26-b1b6-450a-94c6-59928c23c485" (UID: "67df9f26-b1b6-450a-94c6-59928c23c485"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.086326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b1fa4a24-c047-4ce8-9120-5d23671f7937","Type":"ContainerDied","Data":"58f75e4085bac0afe730d9ccdc39850b42a1d8f769ce946a3b5e7f4b42c9e904"} Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.086383 4835 scope.go:117] "RemoveContainer" containerID="f44c09514405a8d86cee1df1cacbbe2f6446d3cd016435d7056fa9ce060a5c34" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.086533 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.100601 4835 generic.go:334] "Generic (PLEG): container finished" podID="67df9f26-b1b6-450a-94c6-59928c23c485" containerID="52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b" exitCode=0 Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.100663 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerDied","Data":"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b"} Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.100694 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"67df9f26-b1b6-450a-94c6-59928c23c485","Type":"ContainerDied","Data":"91fdc65319d398b52643785d1b929bf4d073f88e4bb95af752cb93315dc01e72"} Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.100766 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.103657 4835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.103711 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.103728 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98vng\" (UniqueName: \"kubernetes.io/projected/67df9f26-b1b6-450a-94c6-59928c23c485-kube-api-access-98vng\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.103741 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67df9f26-b1b6-450a-94c6-59928c23c485-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.106045 4835 generic.go:334] "Generic (PLEG): container finished" podID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerID="add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e" exitCode=0 Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.106093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerDied","Data":"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e"} Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.106124 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14bd6525-538d-46df-9a67-8bc3e317f8d3","Type":"ContainerDied","Data":"9838cc28015d1d8ee6f425f2d585737172929bef6e407ffd377ee616e135ab3c"} 
Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.106183 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.189594 4835 scope.go:117] "RemoveContainer" containerID="52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.199167 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.226591 4835 scope.go:117] "RemoveContainer" containerID="b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.227245 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.279503 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.287369 4835 scope.go:117] "RemoveContainer" containerID="52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.288983 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b\": container with ID starting with 52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b not found: ID does not exist" containerID="52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.289030 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b"} err="failed to get container status \"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b\": rpc error: code = NotFound desc = 
could not find container \"52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b\": container with ID starting with 52504c06036af13c87c6e98d34853eb77e74298a6bdd48676e403c52b303889b not found: ID does not exist" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.289055 4835 scope.go:117] "RemoveContainer" containerID="b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.292140 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.292801 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098\": container with ID starting with b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098 not found: ID does not exist" containerID="b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.292843 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098"} err="failed to get container status \"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098\": rpc error: code = NotFound desc = could not find container \"b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098\": container with ID starting with b6b454f2298eb4a955e22a83b960f1e637fd2d030c11b9cede8de87bbea85098 not found: ID does not exist" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.292867 4835 scope.go:117] "RemoveContainer" containerID="add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.308823 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.317337 
4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.327687 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329264 4835 scope.go:117] "RemoveContainer" containerID="e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.329529 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-api" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329567 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-api" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.329591 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-log" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329598 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-log" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.329611 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329619 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.329636 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45494e83-6ddf-4d30-8c01-2ffc335849d3" containerName="nova-manage" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329643 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="45494e83-6ddf-4d30-8c01-2ffc335849d3" containerName="nova-manage" Jan 29 16:01:52 
crc kubenswrapper[4835]: E0129 16:01:52.329658 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329665 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.329799 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1fa4a24-c047-4ce8-9120-5d23671f7937" containerName="nova-scheduler-scheduler" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.329809 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1fa4a24-c047-4ce8-9120-5d23671f7937" containerName="nova-scheduler-scheduler" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330144 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1fa4a24-c047-4ce8-9120-5d23671f7937" containerName="nova-scheduler-scheduler" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330172 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-api" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330191 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="45494e83-6ddf-4d30-8c01-2ffc335849d3" containerName="nova-manage" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330203 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-log" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330226 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" containerName="nova-metadata-metadata" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.330243 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" containerName="nova-api-log" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.332511 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.334371 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.335388 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.336500 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.337861 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.340983 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.346774 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.349794 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.351735 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.351922 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.351993 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.356090 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.364090 4835 scope.go:117] "RemoveContainer" containerID="add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.366034 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.366049 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e\": container with ID starting with add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e not found: ID does not exist" containerID="add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.366114 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e"} err="failed to get container status \"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e\": rpc error: code = NotFound desc = could not find container \"add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e\": container with ID starting with 
add4b86e2bdaeb7579bf60477714803b79f4e9c59e9131e3845717a2a4e5288e not found: ID does not exist" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.366145 4835 scope.go:117] "RemoveContainer" containerID="e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13" Jan 29 16:01:52 crc kubenswrapper[4835]: E0129 16:01:52.366629 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13\": container with ID starting with e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13 not found: ID does not exist" containerID="e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.366672 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13"} err="failed to get container status \"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13\": rpc error: code = NotFound desc = could not find container \"e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13\": container with ID starting with e066ed1382f4ec7d393fae29e1ab4499b187f2c626b8d2715705956a8e211a13 not found: ID does not exist" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.378295 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.504859 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14bd6525-538d-46df-9a67-8bc3e317f8d3" path="/var/lib/kubelet/pods/14bd6525-538d-46df-9a67-8bc3e317f8d3/volumes" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.505483 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67df9f26-b1b6-450a-94c6-59928c23c485" path="/var/lib/kubelet/pods/67df9f26-b1b6-450a-94c6-59928c23c485/volumes" Jan 29 16:01:52 
crc kubenswrapper[4835]: I0129 16:01:52.506075 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1fa4a24-c047-4ce8-9120-5d23671f7937" path="/var/lib/kubelet/pods/b1fa4a24-c047-4ce8-9120-5d23671f7937/volumes" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.517089 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.517147 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.517670 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518103 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn65q\" (UniqueName: \"kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518143 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcdg9\" (UniqueName: 
\"kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518184 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv99w\" (UniqueName: \"kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518200 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518265 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518300 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518321 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle\") pod \"nova-scheduler-0\" 
(UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518534 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518607 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518633 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.518698 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.619996 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc 
kubenswrapper[4835]: I0129 16:01:52.620056 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620128 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn65q\" (UniqueName: \"kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620150 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcdg9\" (UniqueName: \"kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620174 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv99w\" (UniqueName: \"kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620192 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620210 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620228 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620245 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620265 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620300 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620318 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 
16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620367 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.620428 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.621633 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.621677 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.625795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.626309 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.627682 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.628208 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.628937 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.629294 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.630309 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.631919 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.635089 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.638138 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn65q\" (UniqueName: \"kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q\") pod \"nova-api-0\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") " pod="openstack/nova-api-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.638342 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcdg9\" (UniqueName: \"kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9\") pod \"nova-metadata-0\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.644296 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv99w\" (UniqueName: \"kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w\") pod \"nova-scheduler-0\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.673080 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.685789 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:01:52 crc kubenswrapper[4835]: I0129 16:01:52.701098 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:01:53 crc kubenswrapper[4835]: I0129 16:01:53.220609 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:01:53 crc kubenswrapper[4835]: I0129 16:01:53.227641 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:01:53 crc kubenswrapper[4835]: I0129 16:01:53.243477 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:01:53 crc kubenswrapper[4835]: W0129 16:01:53.254412 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7276d970_f119_4ab0_9e02_69bca14fc8db.slice/crio-ee423dce7bc34f6e58dadb4862b0ff8d5629af4294993b1754e8307b56ed0891 WatchSource:0}: Error finding container ee423dce7bc34f6e58dadb4862b0ff8d5629af4294993b1754e8307b56ed0891: Status 404 returned error can't find the container with id ee423dce7bc34f6e58dadb4862b0ff8d5629af4294993b1754e8307b56ed0891 Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.131656 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2798812a-4bab-4c0e-a55d-f4c6232e67ae","Type":"ContainerStarted","Data":"1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.132400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2798812a-4bab-4c0e-a55d-f4c6232e67ae","Type":"ContainerStarted","Data":"dd3a60df4ea09fe6a48863e00566846dd614aca7243e8a2c66399200f64e353b"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.134218 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerStarted","Data":"d08761a4204863d086d26102c8b16c4d911a7f2432d7bf68d76fae131c0c304d"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.134326 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerStarted","Data":"fc453b30f0ab0c5cd310759e4ab38ea0a273f5573f846f4eff61f9a22f858b47"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.134346 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerStarted","Data":"ee423dce7bc34f6e58dadb4862b0ff8d5629af4294993b1754e8307b56ed0891"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.137684 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerStarted","Data":"5b5c6de4e9f1b971bd029a229e3ed1243080ca7b0a00aae6a96f1b1534ec82d6"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.137771 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerStarted","Data":"0c1e86f6e8c0ad0573e87b6e82f94e3f2d9b9cd44533cbf25cafdc4662548921"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.137789 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerStarted","Data":"84479377a7f192f2aa3dad9c99127a91276128178a9fdbef09d70a7009e1921a"} Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.156657 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.156626202 podStartE2EDuration="2.156626202s" podCreationTimestamp="2026-01-29 16:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:54.144882736 +0000 UTC m=+2016.337926390" watchObservedRunningTime="2026-01-29 16:01:54.156626202 +0000 UTC m=+2016.349669856" Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.180337 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.18029049 podStartE2EDuration="2.18029049s" podCreationTimestamp="2026-01-29 16:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:54.167853046 +0000 UTC m=+2016.360896700" watchObservedRunningTime="2026-01-29 16:01:54.18029049 +0000 UTC m=+2016.373334134" Jan 29 16:01:54 crc kubenswrapper[4835]: I0129 16:01:54.194587 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.194566 podStartE2EDuration="2.194566s" podCreationTimestamp="2026-01-29 16:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:01:54.186997089 +0000 UTC m=+2016.380040743" watchObservedRunningTime="2026-01-29 16:01:54.194566 +0000 UTC m=+2016.387609654" Jan 29 16:01:57 crc kubenswrapper[4835]: I0129 16:01:57.673953 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:01:57 crc kubenswrapper[4835]: I0129 16:01:57.674314 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 16:01:57 crc kubenswrapper[4835]: I0129 16:01:57.685868 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 16:02:02 crc kubenswrapper[4835]: I0129 16:02:02.674854 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:02:02 crc 
kubenswrapper[4835]: I0129 16:02:02.675488 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 16:02:02 crc kubenswrapper[4835]: I0129 16:02:02.686141 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 16:02:02 crc kubenswrapper[4835]: I0129 16:02:02.701399 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:02:02 crc kubenswrapper[4835]: I0129 16:02:02.701481 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 16:02:02 crc kubenswrapper[4835]: I0129 16:02:02.718358 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 16:02:03 crc kubenswrapper[4835]: I0129 16:02:03.270346 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 16:02:03 crc kubenswrapper[4835]: I0129 16:02:03.688759 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:02:03 crc kubenswrapper[4835]: I0129 16:02:03.688880 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:02:03 crc kubenswrapper[4835]: I0129 16:02:03.715918 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:02:03 crc kubenswrapper[4835]: I0129 16:02:03.715918 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:02:11 crc kubenswrapper[4835]: I0129 16:02:11.362781 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.681091 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.683969 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.688169 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.718809 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.719523 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.725837 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 16:02:12 crc kubenswrapper[4835]: I0129 16:02:12.731032 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:02:13 crc kubenswrapper[4835]: I0129 16:02:13.329024 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 16:02:13 crc kubenswrapper[4835]: 
I0129 16:02:13.334976 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 16:02:13 crc kubenswrapper[4835]: I0129 16:02:13.340288 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 16:02:21 crc kubenswrapper[4835]: I0129 16:02:21.614645 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:02:21 crc kubenswrapper[4835]: I0129 16:02:21.615664 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.127582 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.128383 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" containerName="openstackclient" containerID="cri-o://540959f4551d052c3cb5a4e4f683efeed182efa67d63708676587082e1b90021" gracePeriod=2 Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.147559 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.165008 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5e45-account-create-update-6dnpd"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.205166 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-5e45-account-create-update-6dnpd"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.242162 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:30 crc kubenswrapper[4835]: E0129 16:02:30.242613 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" containerName="openstackclient" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.242629 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" containerName="openstackclient" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.242824 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" containerName="openstackclient" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.243412 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.261048 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.273200 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.283658 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.300055 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.300116 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27hrw\" (UniqueName: \"kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.300160 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckfm\" (UniqueName: \"kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.300252 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.306613 4835 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-db-secret" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.325018 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.408612 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.408752 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.408776 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27hrw\" (UniqueName: \"kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.408799 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckfm\" (UniqueName: \"kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.409880 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.410348 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.412481 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.429553 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.431030 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.438006 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.445906 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7580-account-create-update-thn8w"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.475297 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7580-account-create-update-thn8w"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.499816 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27hrw\" (UniqueName: \"kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw\") pod \"neutron-5e45-account-create-update-fkcrv\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.523171 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckfm\" (UniqueName: \"kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm\") pod \"cinder-7580-account-create-update-jw5tj\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.531678 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73e7878-399e-4442-9e36-a9653b4ed8c0" path="/var/lib/kubelet/pods/c73e7878-399e-4442-9e36-a9653b4ed8c0/volumes" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.542024 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd952bf2-34ba-40e4-af51-2fde882bb593" path="/var/lib/kubelet/pods/dd952bf2-34ba-40e4-af51-2fde882bb593/volumes" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 
16:02:30.566009 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.571521 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.614561 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.614626 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcfp\" (UniqueName: \"kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.742045 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.772421 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcfp\" (UniqueName: \"kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc 
kubenswrapper[4835]: I0129 16:02:30.670340 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.742993 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.758042 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dlwrd"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.846342 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dlwrd"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.881283 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcfp\" (UniqueName: \"kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp\") pod \"root-account-create-update-q82d6\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.914938 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.917234 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.930748 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.930989 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.959539 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.960764 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.968598 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.977688 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.977757 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshrt\" (UniqueName: \"kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.977802 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-gzq9n\" (UniqueName: \"kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.977856 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:30 crc kubenswrapper[4835]: I0129 16:02:30.988061 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-919a-account-create-update-gk65v"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.031650 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-919a-account-create-update-gk65v"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.067496 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.079716 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.079766 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshrt\" (UniqueName: \"kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.079792 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzq9n\" (UniqueName: \"kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.079823 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.080615 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.080893 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.087729 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.129837 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kkwct"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.137271 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshrt\" (UniqueName: \"kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt\") pod \"placement-1390-account-create-update-w4tjz\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.193064 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzq9n\" (UniqueName: \"kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n\") pod \"barbican-919a-account-create-update-thldb\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.214056 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kkwct"] Jan 29 
16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.247614 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.248502 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="openstack-network-exporter" containerID="cri-o://22e7fb3c4f9ff91459d857bf70ed22149138014d698e91d8efaeea0dac7b2a71" gracePeriod=300 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.251604 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.277637 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.294839 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1390-account-create-update-5xf62"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.295363 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.305732 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1390-account-create-update-5xf62"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.322819 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a171-account-create-update-5vthf"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.341552 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6wzhv"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.366712 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6wzhv"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.402143 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a171-account-create-update-5vthf"] Jan 29 16:02:31 crc kubenswrapper[4835]: E0129 16:02:31.411565 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:02:31 crc kubenswrapper[4835]: E0129 16:02:31.411657 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data podName:427d05dd-22a3-4557-b9b3-9b5a39ea4662 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:31.91163696 +0000 UTC m=+2054.104680654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data") pod "rabbitmq-server-0" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662") : configmap "rabbitmq-config-data" not found Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.422527 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pzf56"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.437333 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pzf56"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.511112 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-370b-account-create-update-gspt4"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.561079 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.561325 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="ovn-northd" containerID="cri-o://49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" gracePeriod=30 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.561780 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="openstack-network-exporter" containerID="cri-o://f8216ff353bcb5dadd1afc0099c19fd068e4e203e3468e4a67e19d4e8e285ff2" gracePeriod=30 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.591619 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-370b-account-create-update-gspt4"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.614355 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" 
containerName="ovsdbserver-nb" containerID="cri-o://63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" gracePeriod=300 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.644092 4835 generic.go:334] "Generic (PLEG): container finished" podID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerID="22e7fb3c4f9ff91459d857bf70ed22149138014d698e91d8efaeea0dac7b2a71" exitCode=2 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.644143 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerDied","Data":"22e7fb3c4f9ff91459d857bf70ed22149138014d698e91d8efaeea0dac7b2a71"} Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.662022 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-22ql4"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.694525 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gnnnq"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.733333 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-22ql4"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.766393 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-17d3-account-create-update-7zvtj"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.788374 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gnnnq"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.815679 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-17d3-account-create-update-7zvtj"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.855885 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.856170 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="cinder-scheduler" containerID="cri-o://a28a5cec9281d7421cb36e3f7d37a107045c2b59a77ddfcbfcbee35f116de902" gracePeriod=30 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.856663 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="probe" containerID="cri-o://570734336d9d7b25a8857e635d8a9706c455de491fa31af1b769d6d6503b8994" gracePeriod=30 Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.876530 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.876814 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="dnsmasq-dns" containerID="cri-o://a4bebb7a38805e14b3909ac1eed6f7c8efdae09d6f1fad23656e12533bd81e37" gracePeriod=10 Jan 29 16:02:31 crc kubenswrapper[4835]: E0129 16:02:31.932438 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:02:31 crc kubenswrapper[4835]: E0129 16:02:31.932522 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data podName:427d05dd-22a3-4557-b9b3-9b5a39ea4662 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:32.932507456 +0000 UTC m=+2055.125551110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data") pod "rabbitmq-server-0" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662") : configmap "rabbitmq-config-data" not found Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.969585 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 16:02:31 crc kubenswrapper[4835]: I0129 16:02:31.975594 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-gpfb4" podUID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" containerName="openstack-network-exporter" containerID="cri-o://ab287bc3d69fa692e9cd2585dfd495de5681f84ae38e21c4cac121117ad30151" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.001677 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.015657 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="openstack-network-exporter" containerID="cri-o://1c460abfe7dfc5d912ef11f69c1789679aced0cf5dd1d1c2a054756009ac4f3a" gracePeriod=300 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.096827 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.099460 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-646f7c7f9f-p9x92" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-api" containerID="cri-o://d1037751e301323740b1f50d8b6f17e6d9cc7df3812be6e85cd54495a28354ad" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.100131 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-646f7c7f9f-p9x92" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-httpd" containerID="cri-o://af2ec1e150789b7833e77af35517e4ab23bbeb30ea3652132914ac5c56504042" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.253760 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="ovsdbserver-sb" containerID="cri-o://6bdfd393d80d8cc5d2c247ae44820ea1572128277c82ec3956cbbb2b6731b3c4" gracePeriod=300 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.259964 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.280988 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.301742 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.302049 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api-log" containerID="cri-o://7d39271af2af9a293c7e1b987bdaaac49715c41571364d6fe8ad55a46391979e" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.302732 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api" containerID="cri-o://5d95c6db7e0e718cab2c5db4b47a52f162d8fe1de2e34a697fb6ad3fbe3a9f22" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.318663 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:32 crc kubenswrapper[4835]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: if [ -n "neutron" ]; then Jan 29 16:02:32 crc kubenswrapper[4835]: GRANT_DATABASE="neutron" Jan 29 16:02:32 crc kubenswrapper[4835]: else Jan 29 16:02:32 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:32 crc kubenswrapper[4835]: fi Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:32 crc kubenswrapper[4835]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:32 crc kubenswrapper[4835]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:32 crc kubenswrapper[4835]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:32 crc kubenswrapper[4835]: # support updates Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.318897 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3dd7-account-create-update-k8qqd"] Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.320260 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-5e45-account-create-update-fkcrv" podUID="0d51b909-aee8-4426-b868-d9ff54558a80" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.345119 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.355520 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ggv7m"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.371582 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3dd7-account-create-update-k8qqd"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.417321 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ggv7m"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.428340 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6p7g"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.445373 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-ltvns"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.459109 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m6p7g"] Jan 29 16:02:32 crc 
kubenswrapper[4835]: I0129 16:02:32.468504 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-ltvns"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.477539 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-z2g6x"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.487950 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-z2g6x"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.571088 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0830b0ce-038c-41ff-bed9-e4efa49ae35f" path="/var/lib/kubelet/pods/0830b0ce-038c-41ff-bed9-e4efa49ae35f/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.572030 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25eb2a12-67b0-4a75-9923-9b31dec57982" path="/var/lib/kubelet/pods/25eb2a12-67b0-4a75-9923-9b31dec57982/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.572608 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e05047-7318-4cd2-b46d-6f668be25f1d" path="/var/lib/kubelet/pods/28e05047-7318-4cd2-b46d-6f668be25f1d/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.573824 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31877ce4-9de7-4a70-a383-ea7143b6b61d" path="/var/lib/kubelet/pods/31877ce4-9de7-4a70-a383-ea7143b6b61d/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.574460 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4899ae-0dd0-4a8d-aeb2-65738205364c" path="/var/lib/kubelet/pods/3c4899ae-0dd0-4a8d-aeb2-65738205364c/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.575009 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40d3c372-32e0-4fa4-8b75-a029eb3c27d6" path="/var/lib/kubelet/pods/40d3c372-32e0-4fa4-8b75-a029eb3c27d6/volumes" Jan 29 16:02:32 crc 
kubenswrapper[4835]: I0129 16:02:32.575609 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45494e83-6ddf-4d30-8c01-2ffc335849d3" path="/var/lib/kubelet/pods/45494e83-6ddf-4d30-8c01-2ffc335849d3/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.581200 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5379f6-0b77-4645-9cc1-aa6027b0ade8" path="/var/lib/kubelet/pods/4a5379f6-0b77-4645-9cc1-aa6027b0ade8/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.581777 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a710ef-d5fc-4912-967c-ea1ca55cc761" path="/var/lib/kubelet/pods/55a710ef-d5fc-4912-967c-ea1ca55cc761/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.582359 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a23091d-55d5-469e-9428-34435286ef9c" path="/var/lib/kubelet/pods/9a23091d-55d5-469e-9428-34435286ef9c/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.583593 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa656191-3c36-4639-9e82-de113cae7c7d" path="/var/lib/kubelet/pods/aa656191-3c36-4639-9e82-de113cae7c7d/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.584304 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be114b44-2cf1-4af2-93ae-05b8e56c83c9" path="/var/lib/kubelet/pods/be114b44-2cf1-4af2-93ae-05b8e56c83c9/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.585148 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00de9b8-ac4d-473d-b1fe-0cfde39ed538" path="/var/lib/kubelet/pods/d00de9b8-ac4d-473d-b1fe-0cfde39ed538/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.587635 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90603cb-bccf-4838-b077-7d1fe9da2c34" path="/var/lib/kubelet/pods/d90603cb-bccf-4838-b077-7d1fe9da2c34/volumes" Jan 29 16:02:32 crc 
kubenswrapper[4835]: I0129 16:02:32.588731 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea32b2d-5036-4574-8144-a3e3dd02fdf6" path="/var/lib/kubelet/pods/eea32b2d-5036-4574-8144-a3e3dd02fdf6/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.589463 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9f8da0-19c5-4816-ae04-1981898172d7" path="/var/lib/kubelet/pods/fd9f8da0-19c5-4816-ae04-1981898172d7/volumes" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590563 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590604 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2n9r8"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590620 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2n9r8"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590638 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590654 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590670 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.590944 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-bcd5f5d96-9c2ft" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-log" containerID="cri-o://c65cca305647c572c443a687b71194272f7144b26cee020f19edc85b55bf49d5" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.591110 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-bcd5f5d96-9c2ft" 
podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-api" containerID="cri-o://b0f9698f122586483286669c9c68b041121959ba078f0f206adad9e397fd176f" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.591936 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-server" containerID="cri-o://7d90e6ba25c8c94653671eba222fef93b9b2446541dcca56e78ee9231c96445a" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592004 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-expirer" containerID="cri-o://0e9a70d0d3e573c08f1f05fea5e81d31b394892a0871babe4ea8df23e129049a" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592017 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-server" containerID="cri-o://96326193929bfae4736118da0da7aa31ca82117388834ae8a9d814377a2df442" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592081 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-reaper" containerID="cri-o://ea3655dc8836f21bb4e2d129275d9c13c513c97506d8486147669096ca156531" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592109 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="swift-recon-cron" containerID="cri-o://497362da3cdc7b0ac0586533a3f77b36a12f4d7072bc9dcc5da2221d2561e5d3" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592138 4835 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-auditor" containerID="cri-o://29693bd067c29bcef68d5b953a18c82bdc942785a44206abd862b321b7c77fb6" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592174 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-updater" containerID="cri-o://f95d08291a9c199d391d69a68653c5c74f0e8e27762eb83acb0bc55e0d66b495" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592201 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-replicator" containerID="cri-o://f47a12bee37ff4ad1416b9e4d6055c6ceddc1d33304721e7f0bcb0d682158067" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592177 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="rsync" containerID="cri-o://7ca1893c15edffa73e896d53f7933f0f6d17a150336d46d4a2fa86bcaf2cfdab" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592252 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-auditor" containerID="cri-o://8fd88a76264762f5dddc732bba7fb82a2b2f0305de000da2121400725f7c2608" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592262 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-replicator" 
containerID="cri-o://474104e42dc8f9996c3980ed13200c007cff2281dd29f3028ef1e3636cd57e4d" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592274 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-auditor" containerID="cri-o://ad890ccba45d4c845dc515bcfedc4d4060e8c99cb10da916ae788e373d48865b" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592324 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-updater" containerID="cri-o://f3c885696e6da2f898a5b8c49181385ab9235ba5ae191d51cf760e67509c466e" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592353 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-httpd" containerID="cri-o://024c1b9e43847d2606d0aaf1470ab7b043f475eb17ce51f35fa0cf4e402923e1" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592323 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-log" containerID="cri-o://34a15120c83268cc3829764d188bfd5b463d82edd23441f010b64a781fa805fd" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592400 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-server" containerID="cri-o://cb6a17d1b963b37fe45bdd33c7dcb4010669d718352d5ad5ac97fb197646084b" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.592369 4835 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/swift-storage-0" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-replicator" containerID="cri-o://1cb7a2f57c5116dd63939c6513e77a00a69a8410f016136f8f3113c4eaefb685" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.629829 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.643207 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-log" containerID="cri-o://4bddd694b3a71f4c157e46ee4469e5cde7015e8f59a7762ccb3f61c557f57358" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.643739 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-httpd" containerID="cri-o://da115e8709897068c943f4fb977b6a69f1f71d0f9c94591378a80a37d0e541cb" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.660160 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9w77s"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.693937 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9w77s"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.714893 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-d9gpw"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.741635 4835 generic.go:334] "Generic (PLEG): container finished" podID="fda576f5-4653-4cad-82de-c734c6c7c867" containerID="af2ec1e150789b7833e77af35517e4ab23bbeb30ea3652132914ac5c56504042" exitCode=0 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.741912 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" 
event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerDied","Data":"af2ec1e150789b7833e77af35517e4ab23bbeb30ea3652132914ac5c56504042"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.770983 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.784849 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_14f53619-6432-463f-b0df-5998c1f3298d/ovsdbserver-sb/0.log" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.784916 4835 generic.go:334] "Generic (PLEG): container finished" podID="14f53619-6432-463f-b0df-5998c1f3298d" containerID="1c460abfe7dfc5d912ef11f69c1789679aced0cf5dd1d1c2a054756009ac4f3a" exitCode=2 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.784935 4835 generic.go:334] "Generic (PLEG): container finished" podID="14f53619-6432-463f-b0df-5998c1f3298d" containerID="6bdfd393d80d8cc5d2c247ae44820ea1572128277c82ec3956cbbb2b6731b3c4" exitCode=143 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.784989 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerDied","Data":"1c460abfe7dfc5d912ef11f69c1789679aced0cf5dd1d1c2a054756009ac4f3a"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.785046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerDied","Data":"6bdfd393d80d8cc5d2c247ae44820ea1572128277c82ec3956cbbb2b6731b3c4"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.807543 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-d9gpw"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.827631 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d707fad-0945-4d95-a1ac-3fcf77e60566" 
containerID="f8216ff353bcb5dadd1afc0099c19fd068e4e203e3468e4a67e19d4e8e285ff2" exitCode=2 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.827740 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerDied","Data":"f8216ff353bcb5dadd1afc0099c19fd068e4e203e3468e4a67e19d4e8e285ff2"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.837652 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.837878 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker-log" containerID="cri-o://4038d593222cca32b6f2d81c1b1addde266dc5987b9461c587363594ca454f06" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.838239 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker" containerID="cri-o://1d1e2e8d5b3cc6c604736856ef650f21234223173aea5d23d04e47451a48e118" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.868718 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e45-account-create-update-fkcrv" event={"ID":"0d51b909-aee8-4426-b868-d9ff54558a80","Type":"ContainerStarted","Data":"3c70ca2b96b930841558289b6b216396f497d96256957b55dcf21a34e76ca758"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.892938 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.893156 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4af5e897-6640-4718-a326-3d241fc96840" 
containerName="nova-api-log" containerID="cri-o://0c1e86f6e8c0ad0573e87b6e82f94e3f2d9b9cd44533cbf25cafdc4662548921" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.893272 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-api" containerID="cri-o://5b5c6de4e9f1b971bd029a229e3ed1243080ca7b0a00aae6a96f1b1534ec82d6" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.899772 4835 generic.go:334] "Generic (PLEG): container finished" podID="95c9a919-e32a-49cd-b097-79fe61d7ec64" containerID="540959f4551d052c3cb5a4e4f683efeed182efa67d63708676587082e1b90021" exitCode=137 Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.912322 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.912396 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data podName:372d1dd7-8b24-4652-b6bb-bce75af50e4e nodeName:}" failed. No retries permitted until 2026-01-29 16:02:33.412375693 +0000 UTC m=+2055.605419427 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data") pod "rabbitmq-cell1-server-0" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.917386 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1 is running failed: container process not found" containerID="63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.924968 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1 is running failed: container process not found" containerID="63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.928734 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1 is running failed: container process not found" containerID="63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" cmd=["/usr/bin/pidof","ovsdb-server"] Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.928787 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" 
podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="ovsdbserver-nb" Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.934991 4835 generic.go:334] "Generic (PLEG): container finished" podID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerID="7d39271af2af9a293c7e1b987bdaaac49715c41571364d6fe8ad55a46391979e" exitCode=143 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.935069 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerDied","Data":"7d39271af2af9a293c7e1b987bdaaac49715c41571364d6fe8ad55a46391979e"} Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.947034 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.947291 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" containerID="cri-o://1fa77cf0ee2adb7de043210724e572a07803b3db007e915a2b21ef69b093b4fb" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.947787 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" containerID="cri-o://d537da9c327d41d1628f1e29eec86d0b3bf610d51630148342dc5dae69ebf200" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.991524 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"] Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.991795 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener-log" 
containerID="cri-o://abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: I0129 16:02:32.992450 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener" containerID="cri-o://cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28" gracePeriod=30 Jan 29 16:02:32 crc kubenswrapper[4835]: E0129 16:02:32.999588 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:32 crc kubenswrapper[4835]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: if [ -n "neutron" ]; then Jan 29 16:02:32 crc kubenswrapper[4835]: GRANT_DATABASE="neutron" Jan 29 16:02:32 crc kubenswrapper[4835]: else Jan 29 16:02:32 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:32 crc kubenswrapper[4835]: fi Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:32 crc kubenswrapper[4835]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:32 crc kubenswrapper[4835]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:32 crc kubenswrapper[4835]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:32 crc kubenswrapper[4835]: # support updates Jan 29 16:02:32 crc kubenswrapper[4835]: Jan 29 16:02:32 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.005668 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-5e45-account-create-update-fkcrv" podUID="0d51b909-aee8-4426-b868-d9ff54558a80" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.008516 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ceda64f-39c5-487e-9eb2-7c858e7d4ac3/ovsdbserver-nb/0.log" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.008560 4835 generic.go:334] "Generic (PLEG): container finished" podID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerID="63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" exitCode=143 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.008629 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerDied","Data":"63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1"} Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.012822 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.013163 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data podName:427d05dd-22a3-4557-b9b3-9b5a39ea4662 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:02:35.013147958 +0000 UTC m=+2057.206191612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data") pod "rabbitmq-server-0" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662") : configmap "rabbitmq-config-data" not found Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.042586 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.085424 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gpfb4_7e7c01a6-4ad4-4b01-ab23-176e12337f1d/openstack-network-exporter/0.log" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.085497 4835 generic.go:334] "Generic (PLEG): container finished" podID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" containerID="ab287bc3d69fa692e9cd2585dfd495de5681f84ae38e21c4cac121117ad30151" exitCode=2 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.085589 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gpfb4" event={"ID":"7e7c01a6-4ad4-4b01-ab23-176e12337f1d","Type":"ContainerDied","Data":"ab287bc3d69fa692e9cd2585dfd495de5681f84ae38e21c4cac121117ad30151"} Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.118575 4835 generic.go:334] "Generic (PLEG): container finished" podID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerID="a4bebb7a38805e14b3909ac1eed6f7c8efdae09d6f1fad23656e12533bd81e37" exitCode=0 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.118630 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" event={"ID":"7b92181d-1298-4d3c-93d8-92ee9abb2e1b","Type":"ContainerDied","Data":"a4bebb7a38805e14b3909ac1eed6f7c8efdae09d6f1fad23656e12533bd81e37"} Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.127543 4835 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.202853 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wpbd5"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.285484 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wpbd5"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.310815 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ceda64f-39c5-487e-9eb2-7c858e7d4ac3/ovsdbserver-nb/0.log" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.310918 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.330256 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.398578 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-9b4g2"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.431707 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9b4g2"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432433 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432506 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm87h\" (UniqueName: \"kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: 
\"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432538 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432621 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432664 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432688 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432726 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.432747 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.433151 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.433199 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data podName:372d1dd7-8b24-4652-b6bb-bce75af50e4e nodeName:}" failed. No retries permitted until 2026-01-29 16:02:34.433186637 +0000 UTC m=+2056.626230291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data") pod "rabbitmq-cell1-server-0" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.435578 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts" (OuterVolumeSpecName: "scripts") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.436493 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config" (OuterVolumeSpecName: "config") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.437988 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.445830 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h" (OuterVolumeSpecName: "kube-api-access-wm87h") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "kube-api-access-wm87h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.451414 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.451512 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.451800 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" containerID="cri-o://fc453b30f0ab0c5cd310759e4ab38ea0a273f5573f846f4eff61f9a22f858b47" gracePeriod=30 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.452424 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-metadata" containerID="cri-o://d08761a4204863d086d26102c8b16c4d911a7f2432d7bf68d76fae131c0c304d" gracePeriod=30 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.489866 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wl44f"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.509584 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wl44f"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.522739 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.526586 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.535660 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.535701 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm87h\" (UniqueName: \"kubernetes.io/projected/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-kube-api-access-wm87h\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.535713 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.535723 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.535746 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.544985 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lqgfx"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.558730 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lqgfx"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.581610 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.601801 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.605975 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" containerID="cri-o://d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" gracePeriod=29 Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.630671 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: if [ -n "" ]; then Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="" Jan 29 16:02:33 crc kubenswrapper[4835]: else Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:33 crc kubenswrapper[4835]: fi Jan 29 
16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:33 crc kubenswrapper[4835]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:33 crc kubenswrapper[4835]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:33 crc kubenswrapper[4835]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:33 crc kubenswrapper[4835]: # support updates Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.632913 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-q82d6" podUID="1317f4ba-c71b-4744-a2d6-c6f9be366835" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.636652 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb\") pod \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.636741 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config\") pod \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.636809 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0\") pod 
\"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.636875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb\") pod \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.637144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98d9h\" (UniqueName: \"kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h\") pod \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.637183 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc\") pod \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\" (UID: \"7b92181d-1298-4d3c-93d8-92ee9abb2e1b\") " Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.637747 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.641813 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.642756 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f" gracePeriod=30 Jan 29 16:02:33 crc 
kubenswrapper[4835]: E0129 16:02:33.656247 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: if [ -n "barbican" ]; then Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="barbican" Jan 29 16:02:33 crc kubenswrapper[4835]: else Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:33 crc kubenswrapper[4835]: fi Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:33 crc kubenswrapper[4835]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:33 crc kubenswrapper[4835]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:33 crc kubenswrapper[4835]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:33 crc kubenswrapper[4835]: # support updates Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.659355 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.668892 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.674188 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: if [ -n "placement" ]; then Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="placement" Jan 29 16:02:33 crc kubenswrapper[4835]: else Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:33 crc kubenswrapper[4835]: fi Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:33 crc kubenswrapper[4835]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:33 crc kubenswrapper[4835]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:33 crc kubenswrapper[4835]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:33 crc kubenswrapper[4835]: # support updates Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.675884 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-919a-account-create-update-thldb" podUID="9387d017-5a11-4bf5-a9be-0895c6871f88" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.677761 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-1390-account-create-update-w4tjz" podUID="28feb8dd-1f29-4594-a8fd-964758136d30" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.683353 4835 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc 
kubenswrapper[4835]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: if [ -n "cinder" ]; then Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="cinder" Jan 29 16:02:33 crc kubenswrapper[4835]: else Jan 29 16:02:33 crc kubenswrapper[4835]: GRANT_DATABASE="*" Jan 29 16:02:33 crc kubenswrapper[4835]: fi Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: # going for maximum compatibility here: Jan 29 16:02:33 crc kubenswrapper[4835]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 29 16:02:33 crc kubenswrapper[4835]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 29 16:02:33 crc kubenswrapper[4835]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 29 16:02:33 crc kubenswrapper[4835]: # support updates Jan 29 16:02:33 crc kubenswrapper[4835]: Jan 29 16:02:33 crc kubenswrapper[4835]: $MYSQL_CMD < logger="UnhandledError" Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.684995 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-7580-account-create-update-jw5tj" podUID="d40748c9-6347-493a-9819-6b9606830b20" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.722707 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.736950 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" containerID="cri-o://de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" gracePeriod=30 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.740003 4835 reconciler_common.go:293] "Volume detached 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.781966 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h" (OuterVolumeSpecName: "kube-api-access-98d9h") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "kube-api-access-98d9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.794034 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.829059 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.844304 4835 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:02:33 crc kubenswrapper[4835]: + source /usr/local/bin/container-scripts/functions Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNBridge=br-int Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNEncapType=geneve Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNAvailabilityZones= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ EnableChassisAsGateway=true Jan 29 16:02:33 crc kubenswrapper[4835]: ++ PhysicalNetworks= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNHostName= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:02:33 crc kubenswrapper[4835]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:02:33 crc 
kubenswrapper[4835]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:02:33 crc kubenswrapper[4835]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:02:33 crc kubenswrapper[4835]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + cleanup_ovsdb_server_semaphore Jan 29 16:02:33 crc kubenswrapper[4835]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:02:33 crc kubenswrapper[4835]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-v49xf" message=< Jan 29 16:02:33 crc kubenswrapper[4835]: Exiting ovsdb-server (5) ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:02:33 crc kubenswrapper[4835]: + source /usr/local/bin/container-scripts/functions Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNBridge=br-int Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNEncapType=geneve Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNAvailabilityZones= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ EnableChassisAsGateway=true Jan 29 16:02:33 crc kubenswrapper[4835]: ++ PhysicalNetworks= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNHostName= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:02:33 crc 
kubenswrapper[4835]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:02:33 crc kubenswrapper[4835]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:02:33 crc kubenswrapper[4835]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:02:33 crc kubenswrapper[4835]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + cleanup_ovsdb_server_semaphore Jan 29 16:02:33 crc kubenswrapper[4835]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:02:33 crc kubenswrapper[4835]: > Jan 29 16:02:33 crc kubenswrapper[4835]: E0129 16:02:33.844348 4835 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 16:02:33 crc kubenswrapper[4835]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 29 16:02:33 crc kubenswrapper[4835]: + source /usr/local/bin/container-scripts/functions Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNBridge=br-int Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNRemote=tcp:localhost:6642 Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNEncapType=geneve Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNAvailabilityZones= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ EnableChassisAsGateway=true Jan 29 16:02:33 crc kubenswrapper[4835]: ++ PhysicalNetworks= Jan 29 16:02:33 crc kubenswrapper[4835]: ++ OVNHostName= Jan 29 
16:02:33 crc kubenswrapper[4835]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 29 16:02:33 crc kubenswrapper[4835]: ++ ovs_dir=/var/lib/openvswitch Jan 29 16:02:33 crc kubenswrapper[4835]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 29 16:02:33 crc kubenswrapper[4835]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 29 16:02:33 crc kubenswrapper[4835]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + sleep 0.5 Jan 29 16:02:33 crc kubenswrapper[4835]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 29 16:02:33 crc kubenswrapper[4835]: + cleanup_ovsdb_server_semaphore Jan 29 16:02:33 crc kubenswrapper[4835]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 29 16:02:33 crc kubenswrapper[4835]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 29 16:02:33 crc kubenswrapper[4835]: > pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" containerID="cri-o://9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.844382 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" containerID="cri-o://9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" gracePeriod=29 Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.850415 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98d9h\" (UniqueName: 
\"kubernetes.io/projected/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-kube-api-access-98d9h\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.921537 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.953563 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.979873 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.983307 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:02:33 crc kubenswrapper[4835]: I0129 16:02:33.983583 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerName="nova-scheduler-scheduler" containerID="cri-o://1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" gracePeriod=30 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.017764 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.017990 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor" containerID="cri-o://3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" gracePeriod=30 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 
16:02:34.036229 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhhrj"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.038699 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhhrj"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.058525 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.058600 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7cvqt"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.058826 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" gracePeriod=30 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.071694 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7cvqt"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.073707 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.079343 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") pod \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\" (UID: \"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.079956 4835 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: W0129 16:02:34.080101 4835 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3/volumes/kubernetes.io~secret/metrics-certs-tls-certs Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.080166 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.080923 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" containerID="cri-o://3ee09d0ab84f95e0d34e071b677b6bc38c1151190177df01369eb3c4ce330847" gracePeriod=604800 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.102397 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.102574 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.102598 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.126305 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config" (OuterVolumeSpecName: "config") pod "7b92181d-1298-4d3c-93d8-92ee9abb2e1b" (UID: "7b92181d-1298-4d3c-93d8-92ee9abb2e1b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.152802 4835 generic.go:334] "Generic (PLEG): container finished" podID="921df059-21ec-4ba7-9093-bbcc7c788991" containerID="1d1e2e8d5b3cc6c604736856ef650f21234223173aea5d23d04e47451a48e118" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.152833 4835 generic.go:334] "Generic (PLEG): container finished" podID="921df059-21ec-4ba7-9093-bbcc7c788991" containerID="4038d593222cca32b6f2d81c1b1addde266dc5987b9461c587363594ca454f06" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.152886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerDied","Data":"1d1e2e8d5b3cc6c604736856ef650f21234223173aea5d23d04e47451a48e118"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.152930 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerDied","Data":"4038d593222cca32b6f2d81c1b1addde266dc5987b9461c587363594ca454f06"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.160123 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gpfb4_7e7c01a6-4ad4-4b01-ab23-176e12337f1d/openstack-network-exporter/0.log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.160204 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.160851 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" (UID: "6ceda64f-39c5-487e-9eb2-7c858e7d4ac3"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.166907 4835 generic.go:334] "Generic (PLEG): container finished" podID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerID="4bddd694b3a71f4c157e46ee4469e5cde7015e8f59a7762ccb3f61c557f57358" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.167002 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerDied","Data":"4bddd694b3a71f4c157e46ee4469e5cde7015e8f59a7762ccb3f61c557f57358"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.171250 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q82d6" event={"ID":"1317f4ba-c71b-4744-a2d6-c6f9be366835","Type":"ContainerStarted","Data":"6637c3304993ebe058998c4fef33b26a61272c65146293b0d7e1c38e42822b53"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.180827 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.180949 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.181129 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.181300 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mp97\" (UniqueName: \"kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.181390 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.181432 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config\") pod \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\" (UID: \"7e7c01a6-4ad4-4b01-ab23-176e12337f1d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182127 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182146 4835 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182158 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182170 4835 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182181 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.182191 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7b92181d-1298-4d3c-93d8-92ee9abb2e1b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.183092 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.183127 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). 
InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.183172 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config" (OuterVolumeSpecName: "config") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.185026 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7580-account-create-update-jw5tj" event={"ID":"d40748c9-6347-493a-9819-6b9606830b20","Type":"ContainerStarted","Data":"f763f6cd6c4fa9680c9979eb7e5f890ed46ff6b7d8792cbfbfcb2f2252a7d031"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.192073 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97" (OuterVolumeSpecName: "kube-api-access-7mp97") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). InnerVolumeSpecName "kube-api-access-7mp97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.230106 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-919a-account-create-update-thldb" event={"ID":"9387d017-5a11-4bf5-a9be-0895c6871f88","Type":"ContainerStarted","Data":"7aa07e38f636814516b6dfca3334d0682216e67169f7da206d1b2ce718d8f4e8"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.238588 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.255332 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc934dd0-048d-4030-986d-b76e0b966d73" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.255400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerDied","Data":"9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.255704 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.283778 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config\") pod \"95c9a919-e32a-49cd-b097-79fe61d7ec64\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.283811 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret\") pod \"95c9a919-e32a-49cd-b097-79fe61d7ec64\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.283970 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bthrm\" (UniqueName: \"kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm\") pod \"95c9a919-e32a-49cd-b097-79fe61d7ec64\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284108 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle\") pod \"95c9a919-e32a-49cd-b097-79fe61d7ec64\" (UID: \"95c9a919-e32a-49cd-b097-79fe61d7ec64\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284857 4835 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284878 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mp97\" (UniqueName: \"kubernetes.io/projected/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-kube-api-access-7mp97\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284895 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284907 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.284920 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.292807 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm" (OuterVolumeSpecName: "kube-api-access-bthrm") pod "95c9a919-e32a-49cd-b097-79fe61d7ec64" (UID: "95c9a919-e32a-49cd-b097-79fe61d7ec64"). 
InnerVolumeSpecName "kube-api-access-bthrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.324103 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_14f53619-6432-463f-b0df-5998c1f3298d/ovsdbserver-sb/0.log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.324182 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.353804 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "95c9a919-e32a-49cd-b097-79fe61d7ec64" (UID: "95c9a919-e32a-49cd-b097-79fe61d7ec64"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375584 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="7ca1893c15edffa73e896d53f7933f0f6d17a150336d46d4a2fa86bcaf2cfdab" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375615 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="0e9a70d0d3e573c08f1f05fea5e81d31b394892a0871babe4ea8df23e129049a" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375623 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="f3c885696e6da2f898a5b8c49181385ab9235ba5ae191d51cf760e67509c466e" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375630 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="ad890ccba45d4c845dc515bcfedc4d4060e8c99cb10da916ae788e373d48865b" exitCode=0 Jan 29 16:02:34 crc 
kubenswrapper[4835]: I0129 16:02:34.375639 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="1cb7a2f57c5116dd63939c6513e77a00a69a8410f016136f8f3113c4eaefb685" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375645 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="cb6a17d1b963b37fe45bdd33c7dcb4010669d718352d5ad5ac97fb197646084b" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375652 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="f95d08291a9c199d391d69a68653c5c74f0e8e27762eb83acb0bc55e0d66b495" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375659 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="8fd88a76264762f5dddc732bba7fb82a2b2f0305de000da2121400725f7c2608" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375666 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="474104e42dc8f9996c3980ed13200c007cff2281dd29f3028ef1e3636cd57e4d" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375672 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="96326193929bfae4736118da0da7aa31ca82117388834ae8a9d814377a2df442" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375678 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="ea3655dc8836f21bb4e2d129275d9c13c513c97506d8486147669096ca156531" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375683 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" 
containerID="29693bd067c29bcef68d5b953a18c82bdc942785a44206abd862b321b7c77fb6" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375700 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="f47a12bee37ff4ad1416b9e4d6055c6ceddc1d33304721e7f0bcb0d682158067" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375738 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"7ca1893c15edffa73e896d53f7933f0f6d17a150336d46d4a2fa86bcaf2cfdab"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375764 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"0e9a70d0d3e573c08f1f05fea5e81d31b394892a0871babe4ea8df23e129049a"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375774 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"f3c885696e6da2f898a5b8c49181385ab9235ba5ae191d51cf760e67509c466e"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375782 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"ad890ccba45d4c845dc515bcfedc4d4060e8c99cb10da916ae788e373d48865b"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375790 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"1cb7a2f57c5116dd63939c6513e77a00a69a8410f016136f8f3113c4eaefb685"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"cb6a17d1b963b37fe45bdd33c7dcb4010669d718352d5ad5ac97fb197646084b"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375809 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"f95d08291a9c199d391d69a68653c5c74f0e8e27762eb83acb0bc55e0d66b495"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375817 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"8fd88a76264762f5dddc732bba7fb82a2b2f0305de000da2121400725f7c2608"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"474104e42dc8f9996c3980ed13200c007cff2281dd29f3028ef1e3636cd57e4d"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375836 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"96326193929bfae4736118da0da7aa31ca82117388834ae8a9d814377a2df442"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375845 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"ea3655dc8836f21bb4e2d129275d9c13c513c97506d8486147669096ca156531"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.375852 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"29693bd067c29bcef68d5b953a18c82bdc942785a44206abd862b321b7c77fb6"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 
16:02:34.375863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"f47a12bee37ff4ad1416b9e4d6055c6ceddc1d33304721e7f0bcb0d682158067"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.383101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c9a919-e32a-49cd-b097-79fe61d7ec64" (UID: "95c9a919-e32a-49cd-b097-79fe61d7ec64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.386849 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7e7c01a6-4ad4-4b01-ab23-176e12337f1d" (UID: "7e7c01a6-4ad4-4b01-ab23-176e12337f1d"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387265 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387351 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnh6q\" (UniqueName: \"kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387426 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387492 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387521 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387852 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.387906 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.388294 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir\") pod \"14f53619-6432-463f-b0df-5998c1f3298d\" (UID: \"14f53619-6432-463f-b0df-5998c1f3298d\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.388487 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config" (OuterVolumeSpecName: "config") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.388619 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts" (OuterVolumeSpecName: "scripts") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389350 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389522 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/14f53619-6432-463f-b0df-5998c1f3298d-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389538 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389552 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e7c01a6-4ad4-4b01-ab23-176e12337f1d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389567 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389578 4835 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389589 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/14f53619-6432-463f-b0df-5998c1f3298d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.389603 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bthrm\" (UniqueName: \"kubernetes.io/projected/95c9a919-e32a-49cd-b097-79fe61d7ec64-kube-api-access-bthrm\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.393716 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q" (OuterVolumeSpecName: "kube-api-access-qnh6q") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "kube-api-access-qnh6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.413221 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.424889 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6ceda64f-39c5-487e-9eb2-7c858e7d4ac3/ovsdbserver-nb/0.log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.425060 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.426083 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6ceda64f-39c5-487e-9eb2-7c858e7d4ac3","Type":"ContainerDied","Data":"9851194a389e66e60a9172c26c493ccffbd1c408a433b12d0ba975816c3b178e"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.426158 4835 scope.go:117] "RemoveContainer" containerID="22e7fb3c4f9ff91459d857bf70ed22149138014d698e91d8efaeea0dac7b2a71" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.438107 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" event={"ID":"7b92181d-1298-4d3c-93d8-92ee9abb2e1b","Type":"ContainerDied","Data":"7d1d2c1390098635fcb33a3026efdfe2342c5ef29bac89744b4036718965f252"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.438527 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-867cd545c7-74wd6" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.451013 4835 generic.go:334] "Generic (PLEG): container finished" podID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerID="abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.451104 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerDied","Data":"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.453045 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1390-account-create-update-w4tjz" event={"ID":"28feb8dd-1f29-4594-a8fd-964758136d30","Type":"ContainerStarted","Data":"97a6b5d9c19a9b579c0d62b72f1c9f9cc7f04b4958c9d8324e5a4834f068ab3b"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 
16:02:34.461909 4835 generic.go:334] "Generic (PLEG): container finished" podID="a845528c-90ab-4f61-ac5e-95f275035efa" containerID="570734336d9d7b25a8857e635d8a9706c455de491fa31af1b769d6d6503b8994" exitCode=0 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.462015 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerDied","Data":"570734336d9d7b25a8857e635d8a9706c455de491fa31af1b769d6d6503b8994"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.476546 4835 generic.go:334] "Generic (PLEG): container finished" podID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerID="34a15120c83268cc3829764d188bfd5b463d82edd23441f010b64a781fa805fd" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.476661 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerDied","Data":"34a15120c83268cc3829764d188bfd5b463d82edd23441f010b64a781fa805fd"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.494645 4835 generic.go:334] "Generic (PLEG): container finished" podID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerID="d537da9c327d41d1628f1e29eec86d0b3bf610d51630148342dc5dae69ebf200" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.494764 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerDied","Data":"d537da9c327d41d1628f1e29eec86d0b3bf610d51630148342dc5dae69ebf200"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.516373 4835 scope.go:117] "RemoveContainer" containerID="63043a6f4d21769d8f7557cbfec090a12b03d583fd6e4c099c8f9066b89024d1" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.527110 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap 
"rabbitmq-cell1-config-data" not found Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.527197 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data podName:372d1dd7-8b24-4652-b6bb-bce75af50e4e nodeName:}" failed. No retries permitted until 2026-01-29 16:02:36.527178928 +0000 UTC m=+2058.720222582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data") pod "rabbitmq-cell1-server-0" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.528178 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnh6q\" (UniqueName: \"kubernetes.io/projected/14f53619-6432-463f-b0df-5998c1f3298d-kube-api-access-qnh6q\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.539950 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.542210 4835 generic.go:334] "Generic (PLEG): container finished" podID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerID="fc453b30f0ab0c5cd310759e4ab38ea0a273f5573f846f4eff61f9a22f858b47" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.582635 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.595259 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_14f53619-6432-463f-b0df-5998c1f3298d/ovsdbserver-sb/0.log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.595438 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.597086 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "95c9a919-e32a-49cd-b097-79fe61d7ec64" (UID: "95c9a919-e32a-49cd-b097-79fe61d7ec64"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.599155 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fc1ac9-38d8-4a80-837c-1d4daa8edecd" path="/var/lib/kubelet/pods/07fc1ac9-38d8-4a80-837c-1d4daa8edecd/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.600776 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5cbfb3-cf93-4a36-ab1d-d64d65e68858" path="/var/lib/kubelet/pods/3b5cbfb3-cf93-4a36-ab1d-d64d65e68858/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.601454 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4635b73c-551a-4eb6-a2f0-344e968ea9a8" path="/var/lib/kubelet/pods/4635b73c-551a-4eb6-a2f0-344e968ea9a8/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.602100 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d2384d-74ba-4054-8760-33b5d1fa47ed" path="/var/lib/kubelet/pods/55d2384d-74ba-4054-8760-33b5d1fa47ed/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.603271 4835 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="61fd86b9-312a-4054-a02c-6c27e4909035" path="/var/lib/kubelet/pods/61fd86b9-312a-4054-a02c-6c27e4909035/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.603931 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c9a919-e32a-49cd-b097-79fe61d7ec64" path="/var/lib/kubelet/pods/95c9a919-e32a-49cd-b097-79fe61d7ec64/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.604630 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7478452-6b29-4016-b823-43e509045fe4" path="/var/lib/kubelet/pods/a7478452-6b29-4016-b823-43e509045fe4/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.606057 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89" path="/var/lib/kubelet/pods/b2d5fe5b-c69a-4c4a-83e3-3e982a53dc89/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.607433 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d684c443-8def-40a8-b694-41ccaf6359b2" path="/var/lib/kubelet/pods/d684c443-8def-40a8-b694-41ccaf6359b2/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.608151 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597" path="/var/lib/kubelet/pods/f9f8e0f7-ce6b-4ffb-b0ee-f7f2654f1597/volumes" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.613619 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.643165 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.643195 4835 
reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/95c9a919-e32a-49cd-b097-79fe61d7ec64-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.643209 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.653081 4835 generic.go:334] "Generic (PLEG): container finished" podID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerID="c65cca305647c572c443a687b71194272f7144b26cee020f19edc85b55bf49d5" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.694800 4835 generic.go:334] "Generic (PLEG): container finished" podID="4af5e897-6640-4718-a326-3d241fc96840" containerID="0c1e86f6e8c0ad0573e87b6e82f94e3f2d9b9cd44533cbf25cafdc4662548921" exitCode=143 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.707795 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.707886 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gpfb4_7e7c01a6-4ad4-4b01-ab23-176e12337f1d/openstack-network-exporter/0.log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.708066 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gpfb4" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.730000 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "14f53619-6432-463f-b0df-5998c1f3298d" (UID: "14f53619-6432-463f-b0df-5998c1f3298d"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.735997 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.750454 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.750502 4835 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f53619-6432-463f-b0df-5998c1f3298d-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.778593 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerDied","Data":"fc453b30f0ab0c5cd310759e4ab38ea0a273f5573f846f4eff61f9a22f858b47"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779231 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"14f53619-6432-463f-b0df-5998c1f3298d","Type":"ContainerDied","Data":"31a071e73b191ab60c9380db51c7f6fd216968334d3e627698a9aeec9313860d"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779336 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerDied","Data":"c65cca305647c572c443a687b71194272f7144b26cee020f19edc85b55bf49d5"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779419 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779528 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779622 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerDied","Data":"0c1e86f6e8c0ad0573e87b6e82f94e3f2d9b9cd44533cbf25cafdc4662548921"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.779712 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gpfb4" event={"ID":"7e7c01a6-4ad4-4b01-ab23-176e12337f1d","Type":"ContainerDied","Data":"49d64c451c1cf40361a87c5fb0b0a99f68b4077c9b1ecf4e081624244f0d1bfb"} Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.812842 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.816656 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" containerID="cri-o://928d4065c80f7caf504725e2e34addde8c39c448d114f8f283af06dd5607f106" gracePeriod=604800 Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.889325 4835 scope.go:117] "RemoveContainer" containerID="a4bebb7a38805e14b3909ac1eed6f7c8efdae09d6f1fad23656e12533bd81e37" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.902100 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.946278 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-gpfb4"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.954486 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom\") pod \"921df059-21ec-4ba7-9093-bbcc7c788991\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.954557 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs\") pod \"921df059-21ec-4ba7-9093-bbcc7c788991\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.954676 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf6q5\" (UniqueName: \"kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5\") pod \"921df059-21ec-4ba7-9093-bbcc7c788991\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " 
Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.954736 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data\") pod \"921df059-21ec-4ba7-9093-bbcc7c788991\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.954775 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle\") pod \"921df059-21ec-4ba7-9093-bbcc7c788991\" (UID: \"921df059-21ec-4ba7-9093-bbcc7c788991\") " Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.957380 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs" (OuterVolumeSpecName: "logs") pod "921df059-21ec-4ba7-9093-bbcc7c788991" (UID: "921df059-21ec-4ba7-9093-bbcc7c788991"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.962902 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.963679 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5" (OuterVolumeSpecName: "kube-api-access-cf6q5") pod "921df059-21ec-4ba7-9093-bbcc7c788991" (UID: "921df059-21ec-4ba7-9093-bbcc7c788991"). InnerVolumeSpecName "kube-api-access-cf6q5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.967691 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "921df059-21ec-4ba7-9093-bbcc7c788991" (UID: "921df059-21ec-4ba7-9093-bbcc7c788991"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.974438 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-867cd545c7-74wd6"] Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.979380 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.982601 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker-log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.982769 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker-log" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.982865 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.982877 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.982894 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.982900 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.982941 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="dnsmasq-dns" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.982949 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="dnsmasq-dns" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.982964 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="ovsdbserver-nb" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.982972 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="ovsdbserver-nb" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.983013 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.983020 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.983036 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="init" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.983041 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="init" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.983050 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.983056 4835 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: E0129 16:02:34.983065 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="ovsdbserver-sb" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.983088 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="ovsdbserver-sb" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984047 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker-log" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984070 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984090 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984100 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f53619-6432-463f-b0df-5998c1f3298d" containerName="ovsdbserver-sb" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984110 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" containerName="dnsmasq-dns" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984120 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" containerName="openstack-network-exporter" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984430 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" containerName="barbican-worker" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.984444 
4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" containerName="ovsdbserver-nb" Jan 29 16:02:34 crc kubenswrapper[4835]: I0129 16:02:34.987190 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.008024 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.008972 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.018638 4835 scope.go:117] "RemoveContainer" containerID="62883f1aedbcfa6977700b6a988ff4f2b15c345d045d8729a65254788e47da6d" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.034681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "921df059-21ec-4ba7-9093-bbcc7c788991" (UID: "921df059-21ec-4ba7-9093-bbcc7c788991"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.057220 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.058829 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsvgn\" (UniqueName: \"kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.058869 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.059095 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf6q5\" (UniqueName: \"kubernetes.io/projected/921df059-21ec-4ba7-9093-bbcc7c788991-kube-api-access-cf6q5\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.059111 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.059120 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.059128 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/921df059-21ec-4ba7-9093-bbcc7c788991-logs\") on node \"crc\" DevicePath \"\"" 
Jan 29 16:02:35 crc kubenswrapper[4835]: E0129 16:02:35.059183 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:02:35 crc kubenswrapper[4835]: E0129 16:02:35.059224 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data podName:427d05dd-22a3-4557-b9b3-9b5a39ea4662 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:39.059209725 +0000 UTC m=+2061.252253379 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data") pod "rabbitmq-server-0" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662") : configmap "rabbitmq-config-data" not found Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.064642 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data" (OuterVolumeSpecName: "config-data") pod "921df059-21ec-4ba7-9093-bbcc7c788991" (UID: "921df059-21ec-4ba7-9093-bbcc7c788991"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.064811 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.076959 4835 scope.go:117] "RemoveContainer" containerID="d76a6aafbc03ceed32e448d0b63896b2dbe40764d4837150ddc617c101e05379" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.085281 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.092215 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.108049 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.154976 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.161714 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcfp\" (UniqueName: \"kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp\") pod \"1317f4ba-c71b-4744-a2d6-c6f9be366835\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.161802 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts\") pod \"9387d017-5a11-4bf5-a9be-0895c6871f88\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162111 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wckfm\" (UniqueName: \"kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm\") pod \"d40748c9-6347-493a-9819-6b9606830b20\" (UID: \"d40748c9-6347-493a-9819-6b9606830b20\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162144 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts\") pod \"d40748c9-6347-493a-9819-6b9606830b20\" (UID: 
\"d40748c9-6347-493a-9819-6b9606830b20\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts\") pod \"1317f4ba-c71b-4744-a2d6-c6f9be366835\" (UID: \"1317f4ba-c71b-4744-a2d6-c6f9be366835\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162309 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzq9n\" (UniqueName: \"kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n\") pod \"9387d017-5a11-4bf5-a9be-0895c6871f88\" (UID: \"9387d017-5a11-4bf5-a9be-0895c6871f88\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162874 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsvgn\" (UniqueName: \"kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.162918 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.163061 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/921df059-21ec-4ba7-9093-bbcc7c788991-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.163985 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.164369 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9387d017-5a11-4bf5-a9be-0895c6871f88" (UID: "9387d017-5a11-4bf5-a9be-0895c6871f88"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.165267 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d40748c9-6347-493a-9819-6b9606830b20" (UID: "d40748c9-6347-493a-9819-6b9606830b20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.166103 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1317f4ba-c71b-4744-a2d6-c6f9be366835" (UID: "1317f4ba-c71b-4744-a2d6-c6f9be366835"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.166744 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp" (OuterVolumeSpecName: "kube-api-access-kwcfp") pod "1317f4ba-c71b-4744-a2d6-c6f9be366835" (UID: "1317f4ba-c71b-4744-a2d6-c6f9be366835"). InnerVolumeSpecName "kube-api-access-kwcfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.182956 4835 scope.go:117] "RemoveContainer" containerID="1c460abfe7dfc5d912ef11f69c1789679aced0cf5dd1d1c2a054756009ac4f3a" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.183008 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm" (OuterVolumeSpecName: "kube-api-access-wckfm") pod "d40748c9-6347-493a-9819-6b9606830b20" (UID: "d40748c9-6347-493a-9819-6b9606830b20"). InnerVolumeSpecName "kube-api-access-wckfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.194705 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n" (OuterVolumeSpecName: "kube-api-access-gzq9n") pod "9387d017-5a11-4bf5-a9be-0895c6871f88" (UID: "9387d017-5a11-4bf5-a9be-0895c6871f88"). InnerVolumeSpecName "kube-api-access-gzq9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.202657 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsvgn\" (UniqueName: \"kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn\") pod \"root-account-create-update-wlrqj\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") " pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.220574 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.226235 4835 scope.go:117] "RemoveContainer" containerID="6bdfd393d80d8cc5d2c247ae44820ea1572128277c82ec3956cbbb2b6731b3c4" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.235331 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.263958 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle\") pod \"c6c94dad-4a93-4c81-b197-781087dc0a8c\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264029 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs\") pod \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264078 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle\") pod \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264163 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom\") pod \"c6c94dad-4a93-4c81-b197-781087dc0a8c\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264205 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data\") pod \"c6c94dad-4a93-4c81-b197-781087dc0a8c\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264238 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs\") pod \"c6c94dad-4a93-4c81-b197-781087dc0a8c\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264292 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4557w\" (UniqueName: \"kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w\") pod \"c6c94dad-4a93-4c81-b197-781087dc0a8c\" (UID: \"c6c94dad-4a93-4c81-b197-781087dc0a8c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264325 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data\") pod \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264382 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk68z\" (UniqueName: \"kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z\") pod \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.264408 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs\") pod 
\"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\" (UID: \"f082fcc3-a213-48ea-a0f0-359e93d9ce1c\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265037 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9387d017-5a11-4bf5-a9be-0895c6871f88-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265065 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wckfm\" (UniqueName: \"kubernetes.io/projected/d40748c9-6347-493a-9819-6b9606830b20-kube-api-access-wckfm\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265078 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d40748c9-6347-493a-9819-6b9606830b20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265089 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1317f4ba-c71b-4744-a2d6-c6f9be366835-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265100 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzq9n\" (UniqueName: \"kubernetes.io/projected/9387d017-5a11-4bf5-a9be-0895c6871f88-kube-api-access-gzq9n\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.265111 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcfp\" (UniqueName: \"kubernetes.io/projected/1317f4ba-c71b-4744-a2d6-c6f9be366835-kube-api-access-kwcfp\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.270072 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs" (OuterVolumeSpecName: "logs") pod 
"c6c94dad-4a93-4c81-b197-781087dc0a8c" (UID: "c6c94dad-4a93-4c81-b197-781087dc0a8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.281408 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c6c94dad-4a93-4c81-b197-781087dc0a8c" (UID: "c6c94dad-4a93-4c81-b197-781087dc0a8c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.285251 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w" (OuterVolumeSpecName: "kube-api-access-4557w") pod "c6c94dad-4a93-4c81-b197-781087dc0a8c" (UID: "c6c94dad-4a93-4c81-b197-781087dc0a8c"). InnerVolumeSpecName "kube-api-access-4557w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.295893 4835 scope.go:117] "RemoveContainer" containerID="ab287bc3d69fa692e9cd2585dfd495de5681f84ae38e21c4cac121117ad30151" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.309411 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z" (OuterVolumeSpecName: "kube-api-access-gk68z") pod "f082fcc3-a213-48ea-a0f0-359e93d9ce1c" (UID: "f082fcc3-a213-48ea-a0f0-359e93d9ce1c"). InnerVolumeSpecName "kube-api-access-gk68z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.321392 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f082fcc3-a213-48ea-a0f0-359e93d9ce1c" (UID: "f082fcc3-a213-48ea-a0f0-359e93d9ce1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.323930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data" (OuterVolumeSpecName: "config-data") pod "f082fcc3-a213-48ea-a0f0-359e93d9ce1c" (UID: "f082fcc3-a213-48ea-a0f0-359e93d9ce1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.324793 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-646f7c7f9f-p9x92" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.189:9696/\": dial tcp 10.217.0.189:9696: connect: connection refused" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.330510 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c94dad-4a93-4c81-b197-781087dc0a8c" (UID: "c6c94dad-4a93-4c81-b197-781087dc0a8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.366152 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshrt\" (UniqueName: \"kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt\") pod \"28feb8dd-1f29-4594-a8fd-964758136d30\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.366513 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts\") pod \"28feb8dd-1f29-4594-a8fd-964758136d30\" (UID: \"28feb8dd-1f29-4594-a8fd-964758136d30\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367034 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367051 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367063 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367075 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6c94dad-4a93-4c81-b197-781087dc0a8c-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367086 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4557w\" 
(UniqueName: \"kubernetes.io/projected/c6c94dad-4a93-4c81-b197-781087dc0a8c-kube-api-access-4557w\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367096 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367106 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk68z\" (UniqueName: \"kubernetes.io/projected/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-kube-api-access-gk68z\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367197 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "f082fcc3-a213-48ea-a0f0-359e93d9ce1c" (UID: "f082fcc3-a213-48ea-a0f0-359e93d9ce1c"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.367532 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28feb8dd-1f29-4594-a8fd-964758136d30" (UID: "28feb8dd-1f29-4594-a8fd-964758136d30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.372782 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt" (OuterVolumeSpecName: "kube-api-access-qshrt") pod "28feb8dd-1f29-4594-a8fd-964758136d30" (UID: "28feb8dd-1f29-4594-a8fd-964758136d30"). InnerVolumeSpecName "kube-api-access-qshrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.393480 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "f082fcc3-a213-48ea-a0f0-359e93d9ce1c" (UID: "f082fcc3-a213-48ea-a0f0-359e93d9ce1c"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.398615 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data" (OuterVolumeSpecName: "config-data") pod "c6c94dad-4a93-4c81-b197-781087dc0a8c" (UID: "c6c94dad-4a93-4c81-b197-781087dc0a8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.401683 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.402970 4835 scope.go:117] "RemoveContainer" containerID="540959f4551d052c3cb5a4e4f683efeed182efa67d63708676587082e1b90021" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.469497 4835 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.469540 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qshrt\" (UniqueName: \"kubernetes.io/projected/28feb8dd-1f29-4594-a8fd-964758136d30-kube-api-access-qshrt\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.469553 4835 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f082fcc3-a213-48ea-a0f0-359e93d9ce1c-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.469564 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6c94dad-4a93-4c81-b197-781087dc0a8c-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.469575 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28feb8dd-1f29-4594-a8fd-964758136d30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.483281 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.483601 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6677554fc9-4vpbh" 
podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-httpd" containerID="cri-o://8916bd9b088f2a42117f9ccac869fd3b5628b655f7f6665b375b874a888bc7d9" gracePeriod=30 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.484123 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-6677554fc9-4vpbh" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-server" containerID="cri-o://d866f0be4bff596f0e208f6621bf6965e406522a230fe73cfee91a2420bda235" gracePeriod=30 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.532370 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:35 crc kubenswrapper[4835]: E0129 16:02:35.606933 4835 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 29 16:02:35 crc kubenswrapper[4835]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-29T16:02:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 16:02:35 crc kubenswrapper[4835]: /etc/init.d/functions: line 589: 673 Alarm clock "$@" Jan 29 16:02:35 crc kubenswrapper[4835]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-wvpc4" message=< Jan 29 16:02:35 crc kubenswrapper[4835]: Exiting ovn-controller (1) [FAILED] Jan 29 16:02:35 crc kubenswrapper[4835]: Killing ovn-controller (1) [ OK ] Jan 29 16:02:35 crc kubenswrapper[4835]: 2026-01-29T16:02:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 16:02:35 crc kubenswrapper[4835]: /etc/init.d/functions: line 589: 673 Alarm clock "$@" Jan 29 16:02:35 crc kubenswrapper[4835]: > Jan 29 16:02:35 crc kubenswrapper[4835]: E0129 16:02:35.606983 4835 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 29 16:02:35 crc kubenswrapper[4835]: command '/usr/share/ovn/scripts/ovn-ctl 
stop_controller' exited with 137: 2026-01-29T16:02:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 29 16:02:35 crc kubenswrapper[4835]: /etc/init.d/functions: line 589: 673 Alarm clock "$@" Jan 29 16:02:35 crc kubenswrapper[4835]: > pod="openstack/ovn-controller-wvpc4" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" containerID="cri-o://a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.607039 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-wvpc4" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" containerID="cri-o://a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" gracePeriod=27 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.677166 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts\") pod \"0d51b909-aee8-4426-b868-d9ff54558a80\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.677644 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27hrw\" (UniqueName: \"kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw\") pod \"0d51b909-aee8-4426-b868-d9ff54558a80\" (UID: \"0d51b909-aee8-4426-b868-d9ff54558a80\") " Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.680026 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d51b909-aee8-4426-b868-d9ff54558a80" (UID: "0d51b909-aee8-4426-b868-d9ff54558a80"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.708238 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw" (OuterVolumeSpecName: "kube-api-access-27hrw") pod "0d51b909-aee8-4426-b868-d9ff54558a80" (UID: "0d51b909-aee8-4426-b868-d9ff54558a80"). InnerVolumeSpecName "kube-api-access-27hrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.777501 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" event={"ID":"921df059-21ec-4ba7-9093-bbcc7c788991","Type":"ContainerDied","Data":"2b37da97af7e488828e1712e1c4350ac8f492f01b7963e69f9bfbeb99cc158fe"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.777540 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f7ddf4db5-rxpvj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.777565 4835 scope.go:117] "RemoveContainer" containerID="1d1e2e8d5b3cc6c604736856ef650f21234223173aea5d23d04e47451a48e118" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.779223 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27hrw\" (UniqueName: \"kubernetes.io/projected/0d51b909-aee8-4426-b868-d9ff54558a80-kube-api-access-27hrw\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.779245 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d51b909-aee8-4426-b868-d9ff54558a80-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.792150 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q82d6" 
event={"ID":"1317f4ba-c71b-4744-a2d6-c6f9be366835","Type":"ContainerDied","Data":"6637c3304993ebe058998c4fef33b26a61272c65146293b0d7e1c38e42822b53"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.792224 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q82d6" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.800055 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.170:8776/healthcheck\": read tcp 10.217.0.2:33210->10.217.0.170:8776: read: connection reset by peer" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.806990 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7580-account-create-update-jw5tj" event={"ID":"d40748c9-6347-493a-9819-6b9606830b20","Type":"ContainerDied","Data":"f763f6cd6c4fa9680c9979eb7e5f890ed46ff6b7d8792cbfbfcb2f2252a7d031"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.807102 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7580-account-create-update-jw5tj" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.812515 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1390-account-create-update-w4tjz" event={"ID":"28feb8dd-1f29-4594-a8fd-964758136d30","Type":"ContainerDied","Data":"97a6b5d9c19a9b579c0d62b72f1c9f9cc7f04b4958c9d8324e5a4834f068ab3b"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.812624 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1390-account-create-update-w4tjz" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.818703 4835 generic.go:334] "Generic (PLEG): container finished" podID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" containerID="2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f" exitCode=0 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.818768 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f082fcc3-a213-48ea-a0f0-359e93d9ce1c","Type":"ContainerDied","Data":"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.818808 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f082fcc3-a213-48ea-a0f0-359e93d9ce1c","Type":"ContainerDied","Data":"15a08c9386b462d445956ed113b63396be4bd10809d2f7324701f1e19fc548f5"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.818870 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.825858 4835 generic.go:334] "Generic (PLEG): container finished" podID="a845528c-90ab-4f61-ac5e-95f275035efa" containerID="a28a5cec9281d7421cb36e3f7d37a107045c2b59a77ddfcbfcbee35f116de902" exitCode=0 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.825977 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerDied","Data":"a28a5cec9281d7421cb36e3f7d37a107045c2b59a77ddfcbfcbee35f116de902"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.828067 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-919a-account-create-update-thldb" event={"ID":"9387d017-5a11-4bf5-a9be-0895c6871f88","Type":"ContainerDied","Data":"7aa07e38f636814516b6dfca3334d0682216e67169f7da206d1b2ce718d8f4e8"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.828091 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-919a-account-create-update-thldb" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.899205 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="7d90e6ba25c8c94653671eba222fef93b9b2446541dcca56e78ee9231c96445a" exitCode=0 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.899293 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"7d90e6ba25c8c94653671eba222fef93b9b2446541dcca56e78ee9231c96445a"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.915049 4835 scope.go:117] "RemoveContainer" containerID="4038d593222cca32b6f2d81c1b1addde266dc5987b9461c587363594ca454f06" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.916272 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e45-account-create-update-fkcrv" event={"ID":"0d51b909-aee8-4426-b868-d9ff54558a80","Type":"ContainerDied","Data":"3c70ca2b96b930841558289b6b216396f497d96256957b55dcf21a34e76ca758"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.916359 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e45-account-create-update-fkcrv" Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.927456 4835 generic.go:334] "Generic (PLEG): container finished" podID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerID="cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28" exitCode=0 Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.927512 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerDied","Data":"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.927543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" event={"ID":"c6c94dad-4a93-4c81-b197-781087dc0a8c","Type":"ContainerDied","Data":"80c072e7b310027788dce8b0ec0008f139036bbb87a82bf19323cbbb69677262"} Jan 29 16:02:35 crc kubenswrapper[4835]: I0129 16:02:35.929132 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-748568c9b6-g5d2z" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.100830 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:36 crc kubenswrapper[4835]: W0129 16:02:36.127789 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97481534_5529_4faf_849e_3b9ce6f44330.slice/crio-4261480e78efd75178ad1bf323ae9cd13c89b4efd624346fdb742505ec1a3cc1 WatchSource:0}: Error finding container 4261480e78efd75178ad1bf323ae9cd13c89b4efd624346fdb742505ec1a3cc1: Status 404 returned error can't find the container with id 4261480e78efd75178ad1bf323ae9cd13c89b4efd624346fdb742505ec1a3cc1 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.184663 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.201578 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5f7ddf4db5-rxpvj"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.211968 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.215651 4835 scope.go:117] "RemoveContainer" containerID="2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.238039 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.248355 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-748568c9b6-g5d2z"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.261860 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.271823 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-919a-account-create-update-thldb"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.291585 4835 scope.go:117] "RemoveContainer" containerID="2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f" Jan 29 16:02:36 crc kubenswrapper[4835]: E0129 16:02:36.293927 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f\": container with ID starting with 2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f not found: ID does not exist" containerID="2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.294005 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f"} err="failed to get container status \"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f\": rpc error: code = NotFound desc = could not find container 
\"2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f\": container with ID starting with 2be0ea2bc62c948edba5c066cdefbf3eaf55a908000104c066d03060b204e21f not found: ID does not exist" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.294040 4835 scope.go:117] "RemoveContainer" containerID="cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.296987 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lssh5\" (UniqueName: \"kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.297092 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.297168 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.297252 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.297298 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.297374 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom\") pod \"a845528c-90ab-4f61-ac5e-95f275035efa\" (UID: \"a845528c-90ab-4f61-ac5e-95f275035efa\") " Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.301396 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.318459 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.318706 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts" (OuterVolumeSpecName: "scripts") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.333030 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5" (OuterVolumeSpecName: "kube-api-access-lssh5") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "kube-api-access-lssh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.378440 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.402586 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lssh5\" (UniqueName: \"kubernetes.io/projected/a845528c-90ab-4f61-ac5e-95f275035efa-kube-api-access-lssh5\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.402611 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a845528c-90ab-4f61-ac5e-95f275035efa-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.402619 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.402627 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.411233 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q82d6"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.415697 4835 
scope.go:117] "RemoveContainer" containerID="abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.437523 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.484192 4835 scope.go:117] "RemoveContainer" containerID="cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28" Jan 29 16:02:36 crc kubenswrapper[4835]: E0129 16:02:36.493648 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28\": container with ID starting with cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28 not found: ID does not exist" containerID="cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.493698 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28"} err="failed to get container status \"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28\": rpc error: code = NotFound desc = could not find container \"cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28\": container with ID starting with cf0aec4801c295df92530b10f467a5886803ec73500907585f0b1dd19c29db28 not found: ID does not exist" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.493725 4835 scope.go:117] "RemoveContainer" containerID="abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.493824 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1390-account-create-update-w4tjz"] Jan 29 16:02:36 crc kubenswrapper[4835]: E0129 16:02:36.510796 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb\": container with ID starting with abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb not found: ID does not exist" containerID="abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.510839 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb"} err="failed to get container status \"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb\": rpc error: code = NotFound desc = could not find container \"abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb\": container with ID starting with abfafdc14d12e649e7ef8cf2386ceb5189fc59ea8a36930df9864ae756c589cb not found: ID does not exist" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.540666 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1317f4ba-c71b-4744-a2d6-c6f9be366835" path="/var/lib/kubelet/pods/1317f4ba-c71b-4744-a2d6-c6f9be366835/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.541363 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f53619-6432-463f-b0df-5998c1f3298d" path="/var/lib/kubelet/pods/14f53619-6432-463f-b0df-5998c1f3298d/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.546843 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.565444 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28feb8dd-1f29-4594-a8fd-964758136d30" path="/var/lib/kubelet/pods/28feb8dd-1f29-4594-a8fd-964758136d30/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.565920 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceda64f-39c5-487e-9eb2-7c858e7d4ac3" path="/var/lib/kubelet/pods/6ceda64f-39c5-487e-9eb2-7c858e7d4ac3/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.566441 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b92181d-1298-4d3c-93d8-92ee9abb2e1b" path="/var/lib/kubelet/pods/7b92181d-1298-4d3c-93d8-92ee9abb2e1b/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.567418 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7c01a6-4ad4-4b01-ab23-176e12337f1d" path="/var/lib/kubelet/pods/7e7c01a6-4ad4-4b01-ab23-176e12337f1d/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.568058 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921df059-21ec-4ba7-9093-bbcc7c788991" path="/var/lib/kubelet/pods/921df059-21ec-4ba7-9093-bbcc7c788991/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.576675 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9387d017-5a11-4bf5-a9be-0895c6871f88" path="/var/lib/kubelet/pods/9387d017-5a11-4bf5-a9be-0895c6871f88/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.577307 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" path="/var/lib/kubelet/pods/c6c94dad-4a93-4c81-b197-781087dc0a8c/volumes" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.613635 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: E0129 16:02:36.613713 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:36 crc kubenswrapper[4835]: E0129 16:02:36.613758 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data podName:372d1dd7-8b24-4652-b6bb-bce75af50e4e nodeName:}" failed. No retries permitted until 2026-01-29 16:02:40.613744747 +0000 UTC m=+2062.806788401 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data") pod "rabbitmq-cell1-server-0" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.667336 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data" (OuterVolumeSpecName: "config-data") pod "a845528c-90ab-4f61-ac5e-95f275035efa" (UID: "a845528c-90ab-4f61-ac5e-95f275035efa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.683652 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:48160->10.217.0.160:9311: read: connection reset by peer" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.683674 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-564bdc756-f7cnh" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:48148->10.217.0.160:9311: read: connection reset by peer" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.715092 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a845528c-90ab-4f61-ac5e-95f275035efa-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.761133 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.761168 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7580-account-create-update-jw5tj"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.761192 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.761205 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5e45-account-create-update-fkcrv"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.761221 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:02:36 crc 
kubenswrapper[4835]: I0129 16:02:36.761234 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.908614 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wvpc4_6757e93d-f6ce-4dc5-8bc3-0de61411df89/ovn-controller/0.log" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.908705 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wvpc4" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.940426 4835 generic.go:334] "Generic (PLEG): container finished" podID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerID="d866f0be4bff596f0e208f6621bf6965e406522a230fe73cfee91a2420bda235" exitCode=0 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.940458 4835 generic.go:334] "Generic (PLEG): container finished" podID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerID="8916bd9b088f2a42117f9ccac869fd3b5628b655f7f6665b375b874a888bc7d9" exitCode=0 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.940539 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerDied","Data":"d866f0be4bff596f0e208f6621bf6965e406522a230fe73cfee91a2420bda235"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.940572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerDied","Data":"8916bd9b088f2a42117f9ccac869fd3b5628b655f7f6665b375b874a888bc7d9"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.956360 4835 generic.go:334] "Generic (PLEG): container finished" podID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerID="5d95c6db7e0e718cab2c5db4b47a52f162d8fe1de2e34a697fb6ad3fbe3a9f22" exitCode=0 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.956441 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerDied","Data":"5d95c6db7e0e718cab2c5db4b47a52f162d8fe1de2e34a697fb6ad3fbe3a9f22"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.960374 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a845528c-90ab-4f61-ac5e-95f275035efa","Type":"ContainerDied","Data":"f4e109ec1081fd2dff07ecc8d5fb3b233bbb0a66b55e7e8dbe3d25f609fd04c2"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.960442 4835 scope.go:117] "RemoveContainer" containerID="570734336d9d7b25a8857e635d8a9706c455de491fa31af1b769d6d6503b8994" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.960665 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.968753 4835 generic.go:334] "Generic (PLEG): container finished" podID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerID="1fa77cf0ee2adb7de043210724e572a07803b3db007e915a2b21ef69b093b4fb" exitCode=0 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.968865 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerDied","Data":"1fa77cf0ee2adb7de043210724e572a07803b3db007e915a2b21ef69b093b4fb"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.970640 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wvpc4_6757e93d-f6ce-4dc5-8bc3-0de61411df89/ovn-controller/0.log" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.970684 4835 generic.go:334] "Generic (PLEG): container finished" podID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerID="a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" exitCode=143 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.970739 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4" event={"ID":"6757e93d-f6ce-4dc5-8bc3-0de61411df89","Type":"ContainerDied","Data":"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.970763 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wvpc4" event={"ID":"6757e93d-f6ce-4dc5-8bc3-0de61411df89","Type":"ContainerDied","Data":"b991f76b6e3f9eafc20c65b87abce2a73767e594d3ccd39f2d4e964edada1f86"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.970841 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wvpc4" Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.974743 4835 generic.go:334] "Generic (PLEG): container finished" podID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerID="b0f9698f122586483286669c9c68b041121959ba078f0f206adad9e397fd176f" exitCode=0 Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.974829 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerDied","Data":"b0f9698f122586483286669c9c68b041121959ba078f0f206adad9e397fd176f"} Jan 29 16:02:36 crc kubenswrapper[4835]: I0129 16:02:36.983574 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlrqj" event={"ID":"97481534-5529-4faf-849e-3b9ce6f44330","Type":"ContainerStarted","Data":"4261480e78efd75178ad1bf323ae9cd13c89b4efd624346fdb742505ec1a3cc1"} Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.005902 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.010626 4835 scope.go:117] "RemoveContainer" containerID="a28a5cec9281d7421cb36e3f7d37a107045c2b59a77ddfcbfcbee35f116de902" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 
16:02:37.012004 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.023732 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.023797 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.023827 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.023973 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024063 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c29wv\" (UniqueName: \"kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024072 4835 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024104 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024138 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts\") pod \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\" (UID: \"6757e93d-f6ce-4dc5-8bc3-0de61411df89\") " Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024387 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run" (OuterVolumeSpecName: "var-run") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024946 4835 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.024974 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.026051 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.026339 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts" (OuterVolumeSpecName: "scripts") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.032231 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv" (OuterVolumeSpecName: "kube-api-access-c29wv") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "kube-api-access-c29wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.045040 4835 scope.go:117] "RemoveContainer" containerID="a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.061702 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.061868 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.064883 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.076425 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.076577 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.083638 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.083701 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.083789 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is 
running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.083844 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.091193 4835 scope.go:117] "RemoveContainer" containerID="a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.093698 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840\": container with ID starting with a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840 not found: ID does not exist" containerID="a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.093746 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840"} err="failed to get container status \"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840\": rpc error: code = NotFound desc = could not find container \"a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840\": container with ID starting with a3d21c384e1506fb35130fe994c226d9a991015e1974cc8cd38203c2f8d08840 not found: ID does not exist" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.125691 4835 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.125721 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c29wv\" (UniqueName: \"kubernetes.io/projected/6757e93d-f6ce-4dc5-8bc3-0de61411df89-kube-api-access-c29wv\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.125731 4835 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6757e93d-f6ce-4dc5-8bc3-0de61411df89-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.125740 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6757e93d-f6ce-4dc5-8bc3-0de61411df89-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.151789 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "6757e93d-f6ce-4dc5-8bc3-0de61411df89" (UID: "6757e93d-f6ce-4dc5-8bc3-0de61411df89"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.231515 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6757e93d-f6ce-4dc5-8bc3-0de61411df89-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.280143 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.280612 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-central-agent" containerID="cri-o://2eb7a757393f8bf70cc9374d3de67c78e1d018eb5513f80056f8cab73670e577" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.282589 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="proxy-httpd" containerID="cri-o://ee958556fc843d154af6291223910bcec2cc7e0ab15659ebccce1239613326c9" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.282589 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-notification-agent" containerID="cri-o://51cd8eb067ac6ab7e38ed9b480cf21bc4f6e8171ac2b87b3dc825503ca0a087c" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.282627 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="sg-core" containerID="cri-o://68f885bfbd51396b1e518a5b7261073d4617342910af6cce3149e027107f651e" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.329404 4835 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.329684 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerName="kube-state-metrics" containerID="cri-o://0b587848172b5d3c1b88a5ab6e25578e5214132176aa1d24b7c6b21960df93b0" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.401575 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.407187 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.415357 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.430439 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wvpc4"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.436713 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.436798 4835 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.532840 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.533082 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerName="memcached" containerID="cri-o://f13f382fce2fe07b9bdea9fc7a6cf5b29127c26d9f07750ff6f018c3213666cb" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.587801 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-815b-account-create-update-ffcjk"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.658362 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.674830 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-815b-account-create-update-ffcjk"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.675435 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": dial tcp 10.217.0.213:8775: connect: connection refused" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.675638 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": dial tcp 10.217.0.213:8775: connect: connection refused" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.688243 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.697511 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-815b-account-create-update-xqllj"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.697881 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.697892 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.697934 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener-log" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.697943 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener-log" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.697957 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.697963 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" Jan 29 16:02:37 crc 
kubenswrapper[4835]: E0129 16:02:37.697976 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="probe" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.697981 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="probe" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.698059 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="cinder-scheduler" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698067 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="cinder-scheduler" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.698086 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698092 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698310 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener-log" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698330 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" containerName="ovn-controller" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698339 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" containerName="cinder-scheduler" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698359 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" 
containerName="probe" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698372 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698382 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c94dad-4a93-4c81-b197-781087dc0a8c" containerName="barbican-keystone-listener" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.698989 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.701330 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.715951 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.728206 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-815b-account-create-update-xqllj"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.736996 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-92dgb"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.740716 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.740769 
4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerName="nova-scheduler-scheduler" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.785330 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zsxhl"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.812445 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zsxhl"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.820861 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-92dgb"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.826096 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.826370 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-5f9f4f755f-g8j9g" podUID="94d5afdb-a473-4923-a414-a7efb63d80cc" containerName="keystone-api" containerID="cri-o://0b0c35e1ffcc8a8104e5ff789a24cd4d76e701e7faa753d0ffc606c3ac6de52a" gracePeriod=30 Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.887480 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45fmv\" (UniqueName: \"kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.887937 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.894592 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29495041-wrglx"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.902532 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29495041-wrglx"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.909323 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-815b-account-create-update-xqllj"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.910564 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-45fmv operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-815b-account-create-update-xqllj" podUID="1c0927a6-88b9-46cc-acda-30cad6fff9bd" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.916442 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.930563 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-4dgc2"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.931786 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-4dgc2"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.955771 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc 
kubenswrapper[4835]: E0129 16:02:37.969771 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.972742 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.979163 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.979843 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.980339 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.981206 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.981251 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.989034 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:37 crc kubenswrapper[4835]: I0129 16:02:37.989105 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45fmv\" (UniqueName: \"kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.989428 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.989501 4835 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts podName:1c0927a6-88b9-46cc-acda-30cad6fff9bd nodeName:}" failed. No retries permitted until 2026-01-29 16:02:38.489485723 +0000 UTC m=+2060.682529377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts") pod "keystone-815b-account-create-update-xqllj" (UID: "1c0927a6-88b9-46cc-acda-30cad6fff9bd") : configmap "openstack-scripts" not found Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.996768 4835 projected.go:194] Error preparing data for projected volume kube-api-access-45fmv for pod openstack/keystone-815b-account-create-update-xqllj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:02:37 crc kubenswrapper[4835]: E0129 16:02:37.996828 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv podName:1c0927a6-88b9-46cc-acda-30cad6fff9bd nodeName:}" failed. No retries permitted until 2026-01-29 16:02:38.496811238 +0000 UTC m=+2060.689854892 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-45fmv" (UniqueName: "kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv") pod "keystone-815b-account-create-update-xqllj" (UID: "1c0927a6-88b9-46cc-acda-30cad6fff9bd") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.061979 4835 generic.go:334] "Generic (PLEG): container finished" podID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerID="024c1b9e43847d2606d0aaf1470ab7b043f475eb17ce51f35fa0cf4e402923e1" exitCode=0 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.062330 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerDied","Data":"024c1b9e43847d2606d0aaf1470ab7b043f475eb17ce51f35fa0cf4e402923e1"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.064734 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bcd5f5d96-9c2ft" event={"ID":"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1","Type":"ContainerDied","Data":"6666f16609d0c8d032d2b4ae64b760a59f3a7b05c51eda076f38150208590c6a"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.064871 4835 scope.go:117] "RemoveContainer" containerID="b0f9698f122586483286669c9c68b041121959ba078f0f206adad9e397fd176f" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.064964 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bcd5f5d96-9c2ft" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.067415 4835 generic.go:334] "Generic (PLEG): container finished" podID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerID="0b587848172b5d3c1b88a5ab6e25578e5214132176aa1d24b7c6b21960df93b0" exitCode=2 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.067494 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b578fda-bb52-4dce-8d76-5e8710aa08a8","Type":"ContainerDied","Data":"0b587848172b5d3c1b88a5ab6e25578e5214132176aa1d24b7c6b21960df93b0"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.080148 4835 generic.go:334] "Generic (PLEG): container finished" podID="97481534-5529-4faf-849e-3b9ce6f44330" containerID="c9818cba7c85de5bf5bfa7fb9d06ea721d490c0155226c0d8da66a729c176e41" exitCode=1 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.080282 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlrqj" event={"ID":"97481534-5529-4faf-849e-3b9ce6f44330","Type":"ContainerDied","Data":"c9818cba7c85de5bf5bfa7fb9d06ea721d490c0155226c0d8da66a729c176e41"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.080634 4835 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-wlrqj" secret="" err="secret \"galera-openstack-dockercfg-mjkcp\" not found" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.081841 4835 scope.go:117] "RemoveContainer" containerID="c9818cba7c85de5bf5bfa7fb9d06ea721d490c0155226c0d8da66a729c176e41" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.088858 4835 generic.go:334] "Generic (PLEG): container finished" podID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerID="68f885bfbd51396b1e518a5b7261073d4617342910af6cce3149e027107f651e" exitCode=2 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.088904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerDied","Data":"68f885bfbd51396b1e518a5b7261073d4617342910af6cce3149e027107f651e"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092022 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092070 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092112 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092171 4835 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092189 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092216 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092242 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092267 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092284 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " 
Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092319 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092336 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092367 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092383 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092410 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgbgx\" (UniqueName: \"kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092428 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tczpz\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092507 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092524 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom\") pod \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\" (UID: \"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjb7b\" (UniqueName: \"kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092588 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc 
kubenswrapper[4835]: I0129 16:02:38.092624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data\") pod \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\" (UID: \"aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092643 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092675 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.092694 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift\") pod \"97f1c686-0356-4885-b6f2-b5b80279f19f\" (UID: \"97f1c686-0356-4885-b6f2-b5b80279f19f\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.096264 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8f331dcd-ab0b-4cbc-86dc-5a5c00e08def","Type":"ContainerDied","Data":"e546f774a69c7bef220344ba61a7fb5ab11517a2f2fa79dd5619c93c427b5475"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.096546 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.098187 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.098253 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts podName:97481534-5529-4faf-849e-3b9ce6f44330 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:38.59823448 +0000 UTC m=+2060.791278144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts") pod "root-account-create-update-wlrqj" (UID: "97481534-5529-4faf-849e-3b9ce6f44330") : configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.102254 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs" (OuterVolumeSpecName: "logs") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.105640 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz" (OuterVolumeSpecName: "kube-api-access-tczpz") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "kube-api-access-tczpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.105696 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.107779 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs" (OuterVolumeSpecName: "logs") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.108392 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.108521 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.109343 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6677554fc9-4vpbh" event={"ID":"97f1c686-0356-4885-b6f2-b5b80279f19f","Type":"ContainerDied","Data":"c948dd315663194976e258a741d9371a7e553e8f95b9b76f2d15e4e409f4a1f7"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.109445 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6677554fc9-4vpbh" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.110950 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx" (OuterVolumeSpecName: "kube-api-access-wgbgx") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "kube-api-access-wgbgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.112091 4835 generic.go:334] "Generic (PLEG): container finished" podID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerID="d08761a4204863d086d26102c8b16c4d911a7f2432d7bf68d76fae131c0c304d" exitCode=0 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.112710 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.112307 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerDied","Data":"d08761a4204863d086d26102c8b16c4d911a7f2432d7bf68d76fae131c0c304d"} Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.112569 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.123755 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.124919 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b" (OuterVolumeSpecName: "kube-api-access-qjb7b") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "kube-api-access-qjb7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.132247 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.132251 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.132985 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts" (OuterVolumeSpecName: "scripts") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.147115 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts" (OuterVolumeSpecName: "scripts") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.166359 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.205:8081/readyz\": dial tcp 10.217.0.205:8081: connect: connection refused" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.172093 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.172213 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" containerID="cri-o://9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" gracePeriod=30 Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201555 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201934 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201951 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201960 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201970 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201978 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-logs\") on node \"crc\" 
DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201986 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgbgx\" (UniqueName: \"kubernetes.io/projected/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-kube-api-access-wgbgx\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.201996 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tczpz\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-kube-api-access-tczpz\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202006 4835 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202015 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202023 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjb7b\" (UniqueName: \"kubernetes.io/projected/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-kube-api-access-qjb7b\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202031 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202039 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97f1c686-0356-4885-b6f2-b5b80279f19f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.202046 4835 
reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/97f1c686-0356-4885-b6f2-b5b80279f19f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.220695 4835 scope.go:117] "RemoveContainer" containerID="c65cca305647c572c443a687b71194272f7144b26cee020f19edc85b55bf49d5" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.230283 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.266731 4835 scope.go:117] "RemoveContainer" containerID="5d95c6db7e0e718cab2c5db4b47a52f162d8fe1de2e34a697fb6ad3fbe3a9f22" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.275306 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.283623 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.303500 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.303744 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.303998 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.311723 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm4jl\" (UniqueName: \"kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.312643 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.312889 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs\") pod \"83e68d7c-697f-41e0-94e8-99a198bf89ad\" (UID: \"83e68d7c-697f-41e0-94e8-99a198bf89ad\") " Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.314453 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.316378 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.316403 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.316414 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.316897 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs" (OuterVolumeSpecName: "logs") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.322631 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl" (OuterVolumeSpecName: "kube-api-access-cm4jl") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "kube-api-access-cm4jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.338783 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.343367 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data" (OuterVolumeSpecName: "config-data") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.344027 4835 scope.go:117] "RemoveContainer" containerID="7d39271af2af9a293c7e1b987bdaaac49715c41571364d6fe8ad55a46391979e" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.345516 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.355908 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data" (OuterVolumeSpecName: "config-data") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.377979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.387846 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data" (OuterVolumeSpecName: "config-data") pod "97f1c686-0356-4885-b6f2-b5b80279f19f" (UID: "97f1c686-0356-4885-b6f2-b5b80279f19f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.390259 4835 scope.go:117] "RemoveContainer" containerID="d866f0be4bff596f0e208f6621bf6965e406522a230fe73cfee91a2420bda235" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.390286 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" (UID: "8f331dcd-ab0b-4cbc-86dc-5a5c00e08def"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.402768 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419041 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm4jl\" (UniqueName: \"kubernetes.io/projected/83e68d7c-697f-41e0-94e8-99a198bf89ad-kube-api-access-cm4jl\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419076 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419090 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419103 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83e68d7c-697f-41e0-94e8-99a198bf89ad-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419115 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419127 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419139 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419150 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f1c686-0356-4885-b6f2-b5b80279f19f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419160 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.419173 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.433848 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.437842 4835 scope.go:117] "RemoveContainer" containerID="8916bd9b088f2a42117f9ccac869fd3b5628b655f7f6665b375b874a888bc7d9" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.438881 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data" (OuterVolumeSpecName: "config-data") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.459899 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.475186 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.488553 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.492037 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.512985 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d51b909-aee8-4426-b868-d9ff54558a80" path="/var/lib/kubelet/pods/0d51b909-aee8-4426-b868-d9ff54558a80/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.513356 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9472b2-4e26-42c0-8bf3-e3c21ba2f411" path="/var/lib/kubelet/pods/1e9472b2-4e26-42c0-8bf3-e3c21ba2f411/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.513865 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2975bccb-ca08-41b3-9086-9868a7f64e8b" path="/var/lib/kubelet/pods/2975bccb-ca08-41b3-9086-9868a7f64e8b/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.514376 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6757e93d-f6ce-4dc5-8bc3-0de61411df89" path="/var/lib/kubelet/pods/6757e93d-f6ce-4dc5-8bc3-0de61411df89/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.515463 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b83e2e-c96d-4376-ad4d-c3765ca6ca5c" path="/var/lib/kubelet/pods/77b83e2e-c96d-4376-ad4d-c3765ca6ca5c/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.515989 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e0d72e-954f-4460-95ff-46e5f602c3c5" path="/var/lib/kubelet/pods/84e0d72e-954f-4460-95ff-46e5f602c3c5/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.516507 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" path="/var/lib/kubelet/pods/8f331dcd-ab0b-4cbc-86dc-5a5c00e08def/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.517509 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a845528c-90ab-4f61-ac5e-95f275035efa" path="/var/lib/kubelet/pods/a845528c-90ab-4f61-ac5e-95f275035efa/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.518161 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40748c9-6347-493a-9819-6b9606830b20" path="/var/lib/kubelet/pods/d40748c9-6347-493a-9819-6b9606830b20/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.518512 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f082fcc3-a213-48ea-a0f0-359e93d9ce1c" path="/var/lib/kubelet/pods/f082fcc3-a213-48ea-a0f0-359e93d9ce1c/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.518992 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff472c74-7674-4064-a6f3-8a1cd21f8d9a" path="/var/lib/kubelet/pods/ff472c74-7674-4064-a6f3-8a1cd21f8d9a/volumes" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.520360 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.520511 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45fmv\" (UniqueName: \"kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv\") pod \"keystone-815b-account-create-update-xqllj\" (UID: \"1c0927a6-88b9-46cc-acda-30cad6fff9bd\") " pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:38 crc 
kubenswrapper[4835]: I0129 16:02:38.520697 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.520780 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.520835 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.521077 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.521142 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts podName:1c0927a6-88b9-46cc-acda-30cad6fff9bd nodeName:}" failed. No retries permitted until 2026-01-29 16:02:39.521126191 +0000 UTC m=+2061.714169845 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts") pod "keystone-815b-account-create-update-xqllj" (UID: "1c0927a6-88b9-46cc-acda-30cad6fff9bd") : configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.525144 4835 projected.go:194] Error preparing data for projected volume kube-api-access-45fmv for pod openstack/keystone-815b-account-create-update-xqllj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.525203 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv podName:1c0927a6-88b9-46cc-acda-30cad6fff9bd nodeName:}" failed. No retries permitted until 2026-01-29 16:02:39.525186173 +0000 UTC m=+2061.718229827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-45fmv" (UniqueName: "kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv") pod "keystone-815b-account-create-update-xqllj" (UID: "1c0927a6-88b9-46cc-acda-30cad6fff9bd") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.528793 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" (UID: "aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.546331 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "83e68d7c-697f-41e0-94e8-99a198bf89ad" (UID: "83e68d7c-697f-41e0-94e8-99a198bf89ad"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.629194 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.629216 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.629256 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/83e68d7c-697f-41e0-94e8-99a198bf89ad-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:38 crc kubenswrapper[4835]: E0129 16:02:38.629277 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts podName:97481534-5529-4faf-849e-3b9ce6f44330 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:39.629257352 +0000 UTC m=+2061.822301006 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts") pod "root-account-create-update-wlrqj" (UID: "97481534-5529-4faf-849e-3b9ce6f44330") : configmap "openstack-scripts" not found Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.885543 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-6677554fc9-4vpbh"] Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.913339 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"] Jan 29 16:02:38 crc kubenswrapper[4835]: I0129 16:02:38.934911 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-bcd5f5d96-9c2ft"] Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.138182 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-564bdc756-f7cnh" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.138336 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-564bdc756-f7cnh" event={"ID":"83e68d7c-697f-41e0-94e8-99a198bf89ad","Type":"ContainerDied","Data":"14600d6710759fddf7a9e83ee289213ab1d2197385b5a1b0e2c5a007564a6d2e"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.140287 4835 scope.go:117] "RemoveContainer" containerID="d537da9c327d41d1628f1e29eec86d0b3bf610d51630148342dc5dae69ebf200" Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.140457 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.140597 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data podName:427d05dd-22a3-4557-b9b3-9b5a39ea4662 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:02:47.140546436 +0000 UTC m=+2069.333590090 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data") pod "rabbitmq-server-0" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662") : configmap "rabbitmq-config-data" not found Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.152307 4835 generic.go:334] "Generic (PLEG): container finished" podID="4af5e897-6640-4718-a326-3d241fc96840" containerID="5b5c6de4e9f1b971bd029a229e3ed1243080ca7b0a00aae6a96f1b1534ec82d6" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.153741 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerDied","Data":"5b5c6de4e9f1b971bd029a229e3ed1243080ca7b0a00aae6a96f1b1534ec82d6"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.155189 4835 generic.go:334] "Generic (PLEG): container finished" podID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.155232 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f4896cd-93c6-4708-8f62-f94cedda3fc0","Type":"ContainerDied","Data":"33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.170002 4835 scope.go:117] "RemoveContainer" containerID="1fa77cf0ee2adb7de043210724e572a07803b3db007e915a2b21ef69b093b4fb" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.171451 4835 generic.go:334] "Generic (PLEG): container finished" podID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerID="1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.171571 4835 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2798812a-4bab-4c0e-a55d-f4c6232e67ae","Type":"ContainerDied","Data":"1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.175141 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.180812 4835 generic.go:334] "Generic (PLEG): container finished" podID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerID="ee958556fc843d154af6291223910bcec2cc7e0ab15659ebccce1239613326c9" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.180850 4835 generic.go:334] "Generic (PLEG): container finished" podID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerID="2eb7a757393f8bf70cc9374d3de67c78e1d018eb5513f80056f8cab73670e577" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.180925 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerDied","Data":"ee958556fc843d154af6291223910bcec2cc7e0ab15659ebccce1239613326c9"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.180986 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerDied","Data":"2eb7a757393f8bf70cc9374d3de67c78e1d018eb5513f80056f8cab73670e577"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.183199 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-564bdc756-f7cnh"] Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.186597 4835 generic.go:334] "Generic (PLEG): container finished" podID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerID="da115e8709897068c943f4fb977b6a69f1f71d0f9c94591378a80a37d0e541cb" exitCode=0 Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.186760 4835 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-815b-account-create-update-xqllj" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.186873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerDied","Data":"da115e8709897068c943f4fb977b6a69f1f71d0f9c94591378a80a37d0e541cb"} Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.244129 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-815b-account-create-update-xqllj"] Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.265054 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-815b-account-create-update-xqllj"] Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.279527 4835 scope.go:117] "RemoveContainer" containerID="ca7ed96849faaee25f26093745e02b1969733ec2783d12f9f2c28050a39785ae" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.345938 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45fmv\" (UniqueName: \"kubernetes.io/projected/1c0927a6-88b9-46cc-acda-30cad6fff9bd-kube-api-access-45fmv\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.345967 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c0927a6-88b9-46cc-acda-30cad6fff9bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.359217 4835 scope.go:117] "RemoveContainer" containerID="4d952b2ba38b92990ffb25b5cfacaf982a6c3071c60affccc48a62e10a9d7b68" Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.406716 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.408848 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.414507 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.414574 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="ovn-northd" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.455019 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.480884 4835 scope.go:117] "RemoveContainer" containerID="a4a8d1d6795a57017295b4d7231ffac10f0d8b6e3e695493171f62c381faf40f" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.620387 4835 scope.go:117] "RemoveContainer" containerID="17e650317719ce83a02693de52e93c1a1fc2141429b8b576788e7372d5c0ce1c" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.659695 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.659938 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.659985 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlmjp\" (UniqueName: \"kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660052 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660284 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660328 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660388 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660425 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.660536 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs\") pod \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\" (UID: \"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141\") " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.661311 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.661518 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:39 crc kubenswrapper[4835]: E0129 16:02:39.661614 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts podName:97481534-5529-4faf-849e-3b9ce6f44330 nodeName:}" failed. No retries permitted until 2026-01-29 16:02:41.661584485 +0000 UTC m=+2063.854628169 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts") pod "root-account-create-update-wlrqj" (UID: "97481534-5529-4faf-849e-3b9ce6f44330") : configmap "openstack-scripts" not found Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.665786 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs" (OuterVolumeSpecName: "logs") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.667497 4835 scope.go:117] "RemoveContainer" containerID="7eb18ea653f478a81bed97600ff9f912bbdefc0f8d27ac1b2ef97fd4acd10fe6" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.668066 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.669517 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts" (OuterVolumeSpecName: "scripts") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.687627 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp" (OuterVolumeSpecName: "kube-api-access-wlmjp") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "kube-api-access-wlmjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.710081 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.731444 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.744141 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data" (OuterVolumeSpecName: "config-data") pod "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" (UID: "e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.761136 4835 scope.go:117] "RemoveContainer" containerID="325712bfc8730f3f0f3a314d46167d7a286b943a428fc267faa019fc9f75760b" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762457 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762499 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlmjp\" (UniqueName: \"kubernetes.io/projected/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-kube-api-access-wlmjp\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762525 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762534 4835 reconciler_common.go:293] "Volume detached 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762544 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762553 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.762562 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.787228 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.817818 4835 scope.go:117] "RemoveContainer" containerID="193333d858401d060c1e4aad514d17075124f018f66f8e855a868c9ff97dddf0" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.871921 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.880871 4835 scope.go:117] "RemoveContainer" containerID="d6cacb620c0e0a6f084b5de1844f2f96598061e8275a483841c42e1012c4362f" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.925676 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.951221 4835 scope.go:117] "RemoveContainer" containerID="b517a0ac1fdb51643a31b57c32c6da8fb261ae8d833ada91622687657049b87f" Jan 29 16:02:39 crc kubenswrapper[4835]: I0129 16:02:39.990047 4835 scope.go:117] "RemoveContainer" containerID="980166b426e17c4c47983854335d5b6f2473ef7574894cdff79499cf71d18b52" Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.040637 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.076498 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcwv4\" (UniqueName: \"kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4\") pod \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.076568 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle\") pod \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.076654 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs\") pod \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.076982 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs\") pod \"7276d970-f119-4ab0-9e02-69bca14fc8db\" (UID: 
\"7276d970-f119-4ab0-9e02-69bca14fc8db\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.077089 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle\") pod \"7276d970-f119-4ab0-9e02-69bca14fc8db\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.077146 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config\") pod \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\" (UID: \"6b578fda-bb52-4dce-8d76-5e8710aa08a8\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.077188 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcdg9\" (UniqueName: \"kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9\") pod \"7276d970-f119-4ab0-9e02-69bca14fc8db\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.077229 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data\") pod \"7276d970-f119-4ab0-9e02-69bca14fc8db\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.090673 4835 scope.go:117] "RemoveContainer" containerID="4a49019f8701fc1b8ccdd0887728f4002467e51649b66e329951a91f49ff49fb" Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.091359 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs" (OuterVolumeSpecName: "logs") pod "7276d970-f119-4ab0-9e02-69bca14fc8db" (UID: 
"7276d970-f119-4ab0-9e02-69bca14fc8db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.103583 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9" (OuterVolumeSpecName: "kube-api-access-gcdg9") pod "7276d970-f119-4ab0-9e02-69bca14fc8db" (UID: "7276d970-f119-4ab0-9e02-69bca14fc8db"). InnerVolumeSpecName "kube-api-access-gcdg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:40 crc kubenswrapper[4835]: I0129 16:02:40.137729 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4" (OuterVolumeSpecName: "kube-api-access-gcwv4") pod "6b578fda-bb52-4dce-8d76-5e8710aa08a8" (UID: "6b578fda-bb52-4dce-8d76-5e8710aa08a8"). InnerVolumeSpecName "kube-api-access-gcwv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.154967 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b578fda-bb52-4dce-8d76-5e8710aa08a8" (UID: "6b578fda-bb52-4dce-8d76-5e8710aa08a8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.154945 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.157047 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.158923 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6b578fda-bb52-4dce-8d76-5e8710aa08a8" (UID: "6b578fda-bb52-4dce-8d76-5e8710aa08a8"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.159798 4835 scope.go:117] "RemoveContainer" containerID="860549624ab3eb45bfaf2a2e1971e8956ea637e261aa5d1e607638b4f557b735" Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.162682 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.162727 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.165734 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data" (OuterVolumeSpecName: "config-data") pod "7276d970-f119-4ab0-9e02-69bca14fc8db" (UID: "7276d970-f119-4ab0-9e02-69bca14fc8db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.178634 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs\") pod \"7276d970-f119-4ab0-9e02-69bca14fc8db\" (UID: \"7276d970-f119-4ab0-9e02-69bca14fc8db\") " Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179169 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179184 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcwv4\" (UniqueName: \"kubernetes.io/projected/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-api-access-gcwv4\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179195 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179203 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7276d970-f119-4ab0-9e02-69bca14fc8db-logs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179211 4835 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.179221 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcdg9\" (UniqueName: 
\"kubernetes.io/projected/7276d970-f119-4ab0-9e02-69bca14fc8db-kube-api-access-gcdg9\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.194547 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6b578fda-bb52-4dce-8d76-5e8710aa08a8" (UID: "6b578fda-bb52-4dce-8d76-5e8710aa08a8"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.195364 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7276d970-f119-4ab0-9e02-69bca14fc8db" (UID: "7276d970-f119-4ab0-9e02-69bca14fc8db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.201844 4835 scope.go:117] "RemoveContainer" containerID="7d8975a8c98e17294ce3724285a6446a95de9bde7d078b39bc89663bc23e5ba2" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.206860 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7276d970-f119-4ab0-9e02-69bca14fc8db","Type":"ContainerDied","Data":"ee423dce7bc34f6e58dadb4862b0ff8d5629af4294993b1754e8307b56ed0891"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.206929 4835 scope.go:117] "RemoveContainer" containerID="d08761a4204863d086d26102c8b16c4d911a7f2432d7bf68d76fae131c0c304d" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.207086 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.218304 4835 generic.go:334] "Generic (PLEG): container finished" podID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerID="f13f382fce2fe07b9bdea9fc7a6cf5b29127c26d9f07750ff6f018c3213666cb" exitCode=0 Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.218512 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8d24538c-d668-497c-b4cd-4bff633b43fb","Type":"ContainerDied","Data":"f13f382fce2fe07b9bdea9fc7a6cf5b29127c26d9f07750ff6f018c3213666cb"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.221554 4835 generic.go:334] "Generic (PLEG): container finished" podID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" exitCode=0 Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.221611 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36560cf0-b011-4fde-8cc0-00d63f2a141a","Type":"ContainerDied","Data":"3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.229046 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141","Type":"ContainerDied","Data":"3b5369c948091099a56fc09f3cf2ee575eb9964c818567bc0207e00ae3fb2c48"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.229148 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.243632 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7276d970-f119-4ab0-9e02-69bca14fc8db" (UID: "7276d970-f119-4ab0-9e02-69bca14fc8db"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.253254 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b578fda-bb52-4dce-8d76-5e8710aa08a8","Type":"ContainerDied","Data":"55bef02e0bdb3da607f6297ccbc16cdfdfdd946590e9eb66d4a3d51faf49d271"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.253366 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.293205 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.305252 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.305285 4835 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7276d970-f119-4ab0-9e02-69bca14fc8db-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.305301 4835 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b578fda-bb52-4dce-8d76-5e8710aa08a8-kube-state-metrics-tls-certs\") on node 
\"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.308518 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.318298 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.327243 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.339750 4835 scope.go:117] "RemoveContainer" containerID="5b824f835f44b904172c32ef10bf8ae6308d9fe5e2cf55462bad7bab8275707e" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.348869 4835 scope.go:117] "RemoveContainer" containerID="fc453b30f0ab0c5cd310759e4ab38ea0a273f5573f846f4eff61f9a22f858b47" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.375708 4835 scope.go:117] "RemoveContainer" containerID="ebe0c4565fdfee27e2560d8636f68ace9e647fb92abff69869e72b260eefab0c" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.401750 4835 scope.go:117] "RemoveContainer" containerID="024c1b9e43847d2606d0aaf1470ab7b043f475eb17ce51f35fa0cf4e402923e1" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.430487 4835 scope.go:117] "RemoveContainer" containerID="b3db866f5c5c10811d36b1ba316f1d95224827373d3c1fd25a1d9ebbccd563cf" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.448075 4835 scope.go:117] "RemoveContainer" containerID="34a15120c83268cc3829764d188bfd5b463d82edd23441f010b64a781fa805fd" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.481302 4835 scope.go:117] "RemoveContainer" containerID="aa6eae13e0634e1a486531e39af03d0f843ce795cea3a7a057c2290823b9b226" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.489384 4835 scope.go:117] "RemoveContainer" containerID="0b587848172b5d3c1b88a5ab6e25578e5214132176aa1d24b7c6b21960df93b0" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 
16:02:40.512573 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c0927a6-88b9-46cc-acda-30cad6fff9bd" path="/var/lib/kubelet/pods/1c0927a6-88b9-46cc-acda-30cad6fff9bd/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.512902 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" path="/var/lib/kubelet/pods/6b578fda-bb52-4dce-8d76-5e8710aa08a8/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.513424 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" path="/var/lib/kubelet/pods/83e68d7c-697f-41e0-94e8-99a198bf89ad/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.514048 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" path="/var/lib/kubelet/pods/97f1c686-0356-4885-b6f2-b5b80279f19f/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.515190 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" path="/var/lib/kubelet/pods/aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.515915 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" path="/var/lib/kubelet/pods/e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141/volumes" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.560501 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.566134 4835 scope.go:117] "RemoveContainer" containerID="6b7329a1e06a9569e545697f0b7b3b8715cccd53175de5de3306fab6c063fed9" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.566792 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.709615 
4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.719192 4835 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:40.719300 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data podName:372d1dd7-8b24-4652-b6bb-bce75af50e4e nodeName:}" failed. No retries permitted until 2026-01-29 16:02:48.719272418 +0000 UTC m=+2070.912316072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data") pod "rabbitmq-cell1-server-0" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e") : configmap "rabbitmq-cell1-config-data" not found Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.720037 4835 scope.go:117] "RemoveContainer" containerID="8eacf4bca6f1fc794f6ecde66779587ce925e62e92a694fa2830be73105d9356" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.820945 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle\") pod \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.820990 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data\") pod \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.821029 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qv99w\" (UniqueName: \"kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w\") pod \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\" (UID: \"2798812a-4bab-4c0e-a55d-f4c6232e67ae\") " Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.825979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w" (OuterVolumeSpecName: "kube-api-access-qv99w") pod "2798812a-4bab-4c0e-a55d-f4c6232e67ae" (UID: "2798812a-4bab-4c0e-a55d-f4c6232e67ae"). InnerVolumeSpecName "kube-api-access-qv99w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.846629 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data" (OuterVolumeSpecName: "config-data") pod "2798812a-4bab-4c0e-a55d-f4c6232e67ae" (UID: "2798812a-4bab-4c0e-a55d-f4c6232e67ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.847449 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2798812a-4bab-4c0e-a55d-f4c6232e67ae" (UID: "2798812a-4bab-4c0e-a55d-f4c6232e67ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.922961 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.922988 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2798812a-4bab-4c0e-a55d-f4c6232e67ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:40.922999 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv99w\" (UniqueName: \"kubernetes.io/projected/2798812a-4bab-4c0e-a55d-f4c6232e67ae-kube-api-access-qv99w\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.280127 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2798812a-4bab-4c0e-a55d-f4c6232e67ae","Type":"ContainerDied","Data":"dd3a60df4ea09fe6a48863e00566846dd614aca7243e8a2c66399200f64e353b"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.280176 4835 scope.go:117] "RemoveContainer" containerID="1acf309d572e059e758b6f475ebeea425fd7bd964c301a0730c7478f986b3473" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.280294 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.295141 4835 generic.go:334] "Generic (PLEG): container finished" podID="94d5afdb-a473-4923-a414-a7efb63d80cc" containerID="0b0c35e1ffcc8a8104e5ff789a24cd4d76e701e7faa753d0ffc606c3ac6de52a" exitCode=0 Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.295205 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9f4f755f-g8j9g" event={"ID":"94d5afdb-a473-4923-a414-a7efb63d80cc","Type":"ContainerDied","Data":"0b0c35e1ffcc8a8104e5ff789a24cd4d76e701e7faa753d0ffc606c3ac6de52a"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.299303 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0d707fad-0945-4d95-a1ac-3fcf77e60566/ovn-northd/0.log" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.299386 4835 generic.go:334] "Generic (PLEG): container finished" podID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerID="49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" exitCode=139 Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.299530 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerDied","Data":"49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6"} Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.302529 4835 generic.go:334] "Generic (PLEG): container finished" podID="97481534-5529-4faf-849e-3b9ce6f44330" containerID="bb23d6c770954947fd692db5184ef7a0ddbac8cc80969a9c43904cedada643f1" exitCode=1 Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.302572 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlrqj" event={"ID":"97481534-5529-4faf-849e-3b9ce6f44330","Type":"ContainerDied","Data":"bb23d6c770954947fd692db5184ef7a0ddbac8cc80969a9c43904cedada643f1"} Jan 29 16:02:43 crc kubenswrapper[4835]: 
E0129 16:02:41.344976 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:41.354961 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:41.356822 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:41.356885 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.358502 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": dial tcp 10.217.0.210:3000: connect: connection refused" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.360350 4835 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.366540 4835 scope.go:117] "RemoveContainer" containerID="c9818cba7c85de5bf5bfa7fb9d06ea721d490c0155226c0d8da66a729c176e41" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.367929 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.446400 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.180:9292/healthcheck\": dial tcp 10.217.0.180:9292: connect: connection refused" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.446853 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.180:9292/healthcheck\": dial tcp 10.217.0.180:9292: connect: connection refused" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:41.672806 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.107:11211: connect: connection refused" Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:41.741315 4835 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:41.741742 4835 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts podName:97481534-5529-4faf-849e-3b9ce6f44330 nodeName:}" failed. 
No retries permitted until 2026-01-29 16:02:45.741726902 +0000 UTC m=+2067.934770556 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts") pod "root-account-create-update-wlrqj" (UID: "97481534-5529-4faf-849e-3b9ce6f44330") : configmap "openstack-scripts" not found Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.051832 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.052512 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.052972 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.052997 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.058254 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.059323 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.060905 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.060958 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.314866 4835 generic.go:334] "Generic (PLEG): container finished" 
podID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerID="3ee09d0ab84f95e0d34e071b677b6bc38c1151190177df01369eb3c4ce330847" exitCode=0
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.314956 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerDied","Data":"3ee09d0ab84f95e0d34e071b677b6bc38c1151190177df01369eb3c4ce330847"}
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.318953 4835 generic.go:334] "Generic (PLEG): container finished" podID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerID="928d4065c80f7caf504725e2e34addde8c39c448d114f8f283af06dd5607f106" exitCode=0
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.319049 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerDied","Data":"928d4065c80f7caf504725e2e34addde8c39c448d114f8f283af06dd5607f106"}
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.383927 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1 is running failed: container process not found" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.384332 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1 is running failed: container process not found" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.384744 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1 is running failed: container process not found" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.384789 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.502456 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" path="/var/lib/kubelet/pods/2798812a-4bab-4c0e-a55d-f4c6232e67ae/volumes"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.503266 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" path="/var/lib/kubelet/pods/7276d970-f119-4ab0-9e02-69bca14fc8db/volumes"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.863624 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6677554fc9-4vpbh" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.153:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:42.863638 4835 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6677554fc9-4vpbh" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.153:8080/healthcheck\": dial tcp 10.217.0.153:8080: i/o timeout"
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.948413 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8 is running failed: container process not found" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.948671 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8 is running failed: container process not found" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.948856 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8 is running failed: container process not found" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 29 16:02:43 crc kubenswrapper[4835]: E0129 16:02:42.948883 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.753614 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.765075 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.776440 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw9db\" (UniqueName: \"kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db\") pod \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.776529 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data\") pod \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.776614 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle\") pod \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\" (UID: \"2f4896cd-93c6-4708-8f62-f94cedda3fc0\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.789853 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db" (OuterVolumeSpecName: "kube-api-access-tw9db") pod "2f4896cd-93c6-4708-8f62-f94cedda3fc0" (UID: "2f4896cd-93c6-4708-8f62-f94cedda3fc0"). InnerVolumeSpecName "kube-api-access-tw9db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.826664 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f4896cd-93c6-4708-8f62-f94cedda3fc0" (UID: "2f4896cd-93c6-4708-8f62-f94cedda3fc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.830428 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data" (OuterVolumeSpecName: "config-data") pod "2f4896cd-93c6-4708-8f62-f94cedda3fc0" (UID: "2f4896cd-93c6-4708-8f62-f94cedda3fc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878118 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878149 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn65q\" (UniqueName: \"kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878171 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878191 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878248 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle\") pod \"4af5e897-6640-4718-a326-3d241fc96840\" (UID: \"4af5e897-6640-4718-a326-3d241fc96840\") "
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878781 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878795 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw9db\" (UniqueName: \"kubernetes.io/projected/2f4896cd-93c6-4708-8f62-f94cedda3fc0-kube-api-access-tw9db\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.878805 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f4896cd-93c6-4708-8f62-f94cedda3fc0-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.879073 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs" (OuterVolumeSpecName: "logs") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.889831 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q" (OuterVolumeSpecName: "kube-api-access-wn65q") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "kube-api-access-wn65q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.903626 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.942101 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.958958 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.959611 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data" (OuterVolumeSpecName: "config-data") pod "4af5e897-6640-4718-a326-3d241fc96840" (UID: "4af5e897-6640-4718-a326-3d241fc96840"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.979988 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.980018 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.980028 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn65q\" (UniqueName: \"kubernetes.io/projected/4af5e897-6640-4718-a326-3d241fc96840-kube-api-access-wn65q\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.980037 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4af5e897-6640-4718-a326-3d241fc96840-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.980046 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:43 crc kubenswrapper[4835]: I0129 16:02:43.980056 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4af5e897-6640-4718-a326-3d241fc96840-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.013038 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f9f4f755f-g8j9g"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.051722 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.066961 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.070513 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wlrqj"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.078934 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081523 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081579 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bhb\" (UniqueName: \"kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081609 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081654 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081696 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58r4n\" (UniqueName: \"kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081756 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081802 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081892 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081931 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081960 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.081990 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.082023 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.082061 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts\") pod \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\" (UID: \"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.082090 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.082110 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.082155 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs\") pod \"94d5afdb-a473-4923-a414-a7efb63d80cc\" (UID: \"94d5afdb-a473-4923-a414-a7efb63d80cc\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.085105 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.104377 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.105127 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs" (OuterVolumeSpecName: "logs") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.120925 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n" (OuterVolumeSpecName: "kube-api-access-58r4n") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "kube-api-access-58r4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.121040 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.121103 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb" (OuterVolumeSpecName: "kube-api-access-b9bhb") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "kube-api-access-b9bhb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.122238 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts" (OuterVolumeSpecName: "scripts") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.123274 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts" (OuterVolumeSpecName: "scripts") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.130442 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.141594 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.197896 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.197946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts\") pod \"97481534-5529-4faf-849e-3b9ce6f44330\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198041 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data\") pod \"8d24538c-d668-497c-b4cd-4bff633b43fb\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198064 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsvgn\" (UniqueName: \"kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn\") pod \"97481534-5529-4faf-849e-3b9ce6f44330\" (UID: \"97481534-5529-4faf-849e-3b9ce6f44330\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198094 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle\") pod \"36560cf0-b011-4fde-8cc0-00d63f2a141a\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198130 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198178 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198203 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198255 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198273 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config\") pod \"8d24538c-d668-497c-b4cd-4bff633b43fb\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198289 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnh4\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198323 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198338 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198354 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r2gc\" (UniqueName: \"kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc\") pod \"8d24538c-d668-497c-b4cd-4bff633b43fb\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198407 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198427 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198475 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7gw\" (UniqueName: \"kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw\") pod \"36560cf0-b011-4fde-8cc0-00d63f2a141a\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data\") pod \"36560cf0-b011-4fde-8cc0-00d63f2a141a\" (UID: \"36560cf0-b011-4fde-8cc0-00d63f2a141a\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198512 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198539 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs\") pod \"8d24538c-d668-497c-b4cd-4bff633b43fb\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198558 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle\") pod \"8d24538c-d668-497c-b4cd-4bff633b43fb\" (UID: \"8d24538c-d668-497c-b4cd-4bff633b43fb\") "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198925 4835 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-logs\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198936 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bhb\" (UniqueName: \"kubernetes.io/projected/94d5afdb-a473-4923-a414-a7efb63d80cc-kube-api-access-b9bhb\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198947 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58r4n\" (UniqueName: \"kubernetes.io/projected/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-kube-api-access-58r4n\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198957 4835 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198975 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198984 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.198992 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.199001 4835 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.199009 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.200157 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.200764 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data" (OuterVolumeSpecName: "config-data") pod "8d24538c-d668-497c-b4cd-4bff633b43fb" (UID: "8d24538c-d668-497c-b4cd-4bff633b43fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.209833 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97481534-5529-4faf-849e-3b9ce6f44330" (UID: "97481534-5529-4faf-849e-3b9ce6f44330"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.221915 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "8d24538c-d668-497c-b4cd-4bff633b43fb" (UID: "8d24538c-d668-497c-b4cd-4bff633b43fb"). InnerVolumeSpecName "kolla-config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.222863 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.223516 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.225195 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.238275 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw" (OuterVolumeSpecName: "kube-api-access-mg7gw") pod "36560cf0-b011-4fde-8cc0-00d63f2a141a" (UID: "36560cf0-b011-4fde-8cc0-00d63f2a141a"). InnerVolumeSpecName "kube-api-access-mg7gw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.246742 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.247686 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0d707fad-0945-4d95-a1ac-3fcf77e60566/ovn-northd/0.log" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.247819 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.254873 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc" (OuterVolumeSpecName: "kube-api-access-9r2gc") pod "8d24538c-d668-497c-b4cd-4bff633b43fb" (UID: "8d24538c-d668-497c-b4cd-4bff633b43fb"). InnerVolumeSpecName "kube-api-access-9r2gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.268771 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn" (OuterVolumeSpecName: "kube-api-access-rsvgn") pod "97481534-5529-4faf-849e-3b9ce6f44330" (UID: "97481534-5529-4faf-849e-3b9ce6f44330"). InnerVolumeSpecName "kube-api-access-rsvgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.289681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4" (OuterVolumeSpecName: "kube-api-access-hpnh4") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "kube-api-access-hpnh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.295024 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info" (OuterVolumeSpecName: "pod-info") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304123 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304223 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: 
\"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304367 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-255wc\" (UniqueName: \"kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304392 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304427 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.304600 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir\") pod \"0d707fad-0945-4d95-a1ac-3fcf77e60566\" (UID: \"0d707fad-0945-4d95-a1ac-3fcf77e60566\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305037 4835 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305073 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 16:02:44 
crc kubenswrapper[4835]: I0129 16:02:44.305088 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7gw\" (UniqueName: \"kubernetes.io/projected/36560cf0-b011-4fde-8cc0-00d63f2a141a-kube-api-access-mg7gw\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305101 4835 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372d1dd7-8b24-4652-b6bb-bce75af50e4e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305114 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97481534-5529-4faf-849e-3b9ce6f44330-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305126 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305138 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsvgn\" (UniqueName: \"kubernetes.io/projected/97481534-5529-4faf-849e-3b9ce6f44330-kube-api-access-rsvgn\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305155 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305167 4835 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372d1dd7-8b24-4652-b6bb-bce75af50e4e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305179 4835 reconciler_common.go:293] 
"Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8d24538c-d668-497c-b4cd-4bff633b43fb-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305190 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpnh4\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-kube-api-access-hpnh4\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305202 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.305214 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r2gc\" (UniqueName: \"kubernetes.io/projected/8d24538c-d668-497c-b4cd-4bff633b43fb-kube-api-access-9r2gc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.323130 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.323612 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config" (OuterVolumeSpecName: "config") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.323681 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts" (OuterVolumeSpecName: "scripts") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.323989 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.329662 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.345700 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc" (OuterVolumeSpecName: "kube-api-access-255wc") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "kube-api-access-255wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.374362 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0d707fad-0945-4d95-a1ac-3fcf77e60566/ovn-northd/0.log" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.374654 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.374688 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.374765 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0d707fad-0945-4d95-a1ac-3fcf77e60566","Type":"ContainerDied","Data":"216bb6121f2ef28e5ca30f14201a902a3645f4bfbf57e8aeae8e438a3c82ad61"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.374845 4835 scope.go:117] "RemoveContainer" containerID="f8216ff353bcb5dadd1afc0099c19fd068e4e203e3468e4a67e19d4e8e285ff2" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.409804 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.409891 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.409972 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: 
\"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410062 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410090 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410132 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nncc5\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410273 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410385 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410436 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410494 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.410567 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret\") pod \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\" (UID: \"427d05dd-22a3-4557-b9b3-9b5a39ea4662\") " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.411483 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.411518 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-255wc\" (UniqueName: \"kubernetes.io/projected/0d707fad-0945-4d95-a1ac-3fcf77e60566-kube-api-access-255wc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.411534 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d707fad-0945-4d95-a1ac-3fcf77e60566-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.411546 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 
16:02:44.411560 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.411578 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.413944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.421427 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.421865 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.424654 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.428891 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.435956 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data" (OuterVolumeSpecName: "config-data") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.436582 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info" (OuterVolumeSpecName: "pod-info") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.441569 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.442720 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5" (OuterVolumeSpecName: "kube-api-access-nncc5") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "kube-api-access-nncc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.443005 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.444492 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.445247 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eed5f4db-cfa8-46d1-a62b-ed9acc5394f2","Type":"ContainerDied","Data":"bc4ca779c8ab9be9e31dcfbc8c301cc4c6547397410df943c3fbd9a3ecb733ac"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.448023 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.462920 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wlrqj" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.462886 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wlrqj" event={"ID":"97481534-5529-4faf-849e-3b9ce6f44330","Type":"ContainerDied","Data":"4261480e78efd75178ad1bf323ae9cd13c89b4efd624346fdb742505ec1a3cc1"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.469040 4835 scope.go:117] "RemoveContainer" containerID="49ee7f757fe797682edfe7b9346a06aed94787d6b7e6f435bfafc6528bff65f6" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.470949 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"427d05dd-22a3-4557-b9b3-9b5a39ea4662","Type":"ContainerDied","Data":"3d3f81f15872f435d1a42b9d08bae938c58195ed97563b06456f5d0455608a91"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.471085 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.480889 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.482904 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"372d1dd7-8b24-4652-b6bb-bce75af50e4e","Type":"ContainerDied","Data":"12a63b3856c819c9803c120fbbc205854c80b668711cefa49c884bde02925bce"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.483005 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.487194 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.491062 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d24538c-d668-497c-b4cd-4bff633b43fb" (UID: "8d24538c-d668-497c-b4cd-4bff633b43fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.491608 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f9f4f755f-g8j9g" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.494601 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.497747 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.498203 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.501646 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.507350 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94d5afdb-a473-4923-a414-a7efb63d80cc" (UID: "94d5afdb-a473-4923-a414-a7efb63d80cc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.513133 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data" (OuterVolumeSpecName: "config-data") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.517950 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.518656 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") pod \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\" (UID: \"372d1dd7-8b24-4652-b6bb-bce75af50e4e\") " Jan 29 16:02:44 crc kubenswrapper[4835]: W0129 16:02:44.518780 4835 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/372d1dd7-8b24-4652-b6bb-bce75af50e4e/volumes/kubernetes.io~configmap/config-data Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.518807 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data" (OuterVolumeSpecName: "config-data") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521324 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521358 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521373 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521409 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521422 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521432 4835 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/427d05dd-22a3-4557-b9b3-9b5a39ea4662-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521446 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521531 4835 reconciler_common.go:293] 
"Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521570 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521584 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521623 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521661 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521673 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94d5afdb-a473-4923-a414-a7efb63d80cc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521684 4835 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/427d05dd-22a3-4557-b9b3-9b5a39ea4662-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521699 4835 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-plugins-conf\") 
on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521732 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nncc5\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-kube-api-access-nncc5\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.521744 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.531671 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf" (OuterVolumeSpecName: "server-conf") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.535912 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.536988 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data" (OuterVolumeSpecName: "config-data") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.554712 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data" (OuterVolumeSpecName: "config-data") pod "36560cf0-b011-4fde-8cc0-00d63f2a141a" (UID: "36560cf0-b011-4fde-8cc0-00d63f2a141a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.555321 4835 scope.go:117] "RemoveContainer" containerID="da115e8709897068c943f4fb977b6a69f1f71d0f9c94591378a80a37d0e541cb" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.555864 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.567626 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.567660 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f9f4f755f-g8j9g" event={"ID":"94d5afdb-a473-4923-a414-a7efb63d80cc","Type":"ContainerDied","Data":"4951ed5c7191fae66a00ed13ac8412dceb569745f3e377377570992027b1281e"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.567685 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4af5e897-6640-4718-a326-3d241fc96840","Type":"ContainerDied","Data":"84479377a7f192f2aa3dad9c99127a91276128178a9fdbef09d70a7009e1921a"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.567701 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2f4896cd-93c6-4708-8f62-f94cedda3fc0","Type":"ContainerDied","Data":"96be128fc97936ffec97484a2fc682323a7ecb55596704d96d77fc5a7a7ff66f"} Jan 29 16:02:44 crc 
kubenswrapper[4835]: I0129 16:02:44.567715 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8d24538c-d668-497c-b4cd-4bff633b43fb","Type":"ContainerDied","Data":"3667df7ace3777ee72c552a8804ff519d80c14420f823a2d8d224d5f7aba52fd"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.567726 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"36560cf0-b011-4fde-8cc0-00d63f2a141a","Type":"ContainerDied","Data":"a1e4f2e0924bcc0323e62eb442dd1a9e356161418aa5ab15239b3710d1c32d1f"} Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.568916 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wlrqj"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.580705 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf" (OuterVolumeSpecName: "server-conf") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.584425 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data" (OuterVolumeSpecName: "config-data") pod "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" (UID: "eed5f4db-cfa8-46d1-a62b-ed9acc5394f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.584721 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "8d24538c-d668-497c-b4cd-4bff633b43fb" (UID: "8d24538c-d668-497c-b4cd-4bff633b43fb"). 
InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.585685 4835 scope.go:117] "RemoveContainer" containerID="4bddd694b3a71f4c157e46ee4469e5cde7015e8f59a7762ccb3f61c557f57358" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.585764 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36560cf0-b011-4fde-8cc0-00d63f2a141a" (UID: "36560cf0-b011-4fde-8cc0-00d63f2a141a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.589442 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.597153 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.605392 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.607576 4835 scope.go:117] "RemoveContainer" containerID="bb23d6c770954947fd692db5184ef7a0ddbac8cc80969a9c43904cedada643f1" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.610042 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.611363 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.611632 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "372d1dd7-8b24-4652-b6bb-bce75af50e4e" (UID: "372d1dd7-8b24-4652-b6bb-bce75af50e4e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.616581 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "0d707fad-0945-4d95-a1ac-3fcf77e60566" (UID: "0d707fad-0945-4d95-a1ac-3fcf77e60566"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622806 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622840 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622851 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622864 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372d1dd7-8b24-4652-b6bb-bce75af50e4e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622880 4835 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372d1dd7-8b24-4652-b6bb-bce75af50e4e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622890 4835 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/427d05dd-22a3-4557-b9b3-9b5a39ea4662-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622900 4835 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622911 
4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622921 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36560cf0-b011-4fde-8cc0-00d63f2a141a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622930 4835 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d24538c-d668-497c-b4cd-4bff633b43fb-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622943 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.622952 4835 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d707fad-0945-4d95-a1ac-3fcf77e60566-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.639930 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "427d05dd-22a3-4557-b9b3-9b5a39ea4662" (UID: "427d05dd-22a3-4557-b9b3-9b5a39ea4662"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.641243 4835 scope.go:117] "RemoveContainer" containerID="3ee09d0ab84f95e0d34e071b677b6bc38c1151190177df01369eb3c4ce330847" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.658570 4835 scope.go:117] "RemoveContainer" containerID="64169b97629fcadaf1de1bd1f7d89edfecc0da8978b11c86487715e1aaeefbfb" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.724756 4835 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/427d05dd-22a3-4557-b9b3-9b5a39ea4662-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.913004 4835 scope.go:117] "RemoveContainer" containerID="928d4065c80f7caf504725e2e34addde8c39c448d114f8f283af06dd5607f106" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.946024 4835 scope.go:117] "RemoveContainer" containerID="b8597f16a2219b2b8c34c3748733464d2ee66f76e90cbef7f856667a18a6d90d" Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.970268 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.982345 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.989387 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:02:44 crc kubenswrapper[4835]: I0129 16:02:44.997265 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.006836 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.013273 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.024829 
4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.031092 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5f9f4f755f-g8j9g"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.032457 4835 scope.go:117] "RemoveContainer" containerID="0b0c35e1ffcc8a8104e5ff789a24cd4d76e701e7faa753d0ffc606c3ac6de52a" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.046021 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.057361 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.058847 4835 scope.go:117] "RemoveContainer" containerID="5b5c6de4e9f1b971bd029a229e3ed1243080ca7b0a00aae6a96f1b1534ec82d6" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.070226 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.076602 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.086396 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.092736 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.094116 4835 scope.go:117] "RemoveContainer" containerID="0c1e86f6e8c0ad0573e87b6e82f94e3f2d9b9cd44533cbf25cafdc4662548921" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.119838 4835 scope.go:117] "RemoveContainer" containerID="33395467a2f0bb995d43b349012421eccd82dfc360b83826a8cd8f8d37d612e1" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 
16:02:45.138272 4835 scope.go:117] "RemoveContainer" containerID="f13f382fce2fe07b9bdea9fc7a6cf5b29127c26d9f07750ff6f018c3213666cb" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.168411 4835 scope.go:117] "RemoveContainer" containerID="3a52f76a23beb16a7257e034487cee2c8fb841e07a1a8eff0df033a37ceeedc8" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.555773 4835 generic.go:334] "Generic (PLEG): container finished" podID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerID="51cd8eb067ac6ab7e38ed9b480cf21bc4f6e8171ac2b87b3dc825503ca0a087c" exitCode=0 Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.555833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerDied","Data":"51cd8eb067ac6ab7e38ed9b480cf21bc4f6e8171ac2b87b3dc825503ca0a087c"} Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.555857 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c15e330a-80e7-4940-b1ec-f63c07d35310","Type":"ContainerDied","Data":"dd4b266c10ade40caf71f506ea441368ab25991c57b67db88ca35c538bfea73e"} Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.555868 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd4b266c10ade40caf71f506ea441368ab25991c57b67db88ca35c538bfea73e" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.621281 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743692 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743812 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743870 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743889 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743946 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.743969 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.744033 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.744059 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qf2v\" (UniqueName: \"kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v\") pod \"c15e330a-80e7-4940-b1ec-f63c07d35310\" (UID: \"c15e330a-80e7-4940-b1ec-f63c07d35310\") " Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.744847 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.745257 4835 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.745425 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.749958 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v" (OuterVolumeSpecName: "kube-api-access-5qf2v") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "kube-api-access-5qf2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.751238 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts" (OuterVolumeSpecName: "scripts") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.771662 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.788211 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.819300 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data" (OuterVolumeSpecName: "config-data") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.828896 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c15e330a-80e7-4940-b1ec-f63c07d35310" (UID: "c15e330a-80e7-4940-b1ec-f63c07d35310"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855877 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855921 4835 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855939 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qf2v\" (UniqueName: \"kubernetes.io/projected/c15e330a-80e7-4940-b1ec-f63c07d35310-kube-api-access-5qf2v\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855957 4835 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c15e330a-80e7-4940-b1ec-f63c07d35310-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 
16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855972 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855984 4835 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:45 crc kubenswrapper[4835]: I0129 16:02:45.855995 4835 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c15e330a-80e7-4940-b1ec-f63c07d35310-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.502725 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" path="/var/lib/kubelet/pods/0d707fad-0945-4d95-a1ac-3fcf77e60566/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.503419 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" path="/var/lib/kubelet/pods/2f4896cd-93c6-4708-8f62-f94cedda3fc0/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.504049 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" path="/var/lib/kubelet/pods/36560cf0-b011-4fde-8cc0-00d63f2a141a/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.505587 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" path="/var/lib/kubelet/pods/372d1dd7-8b24-4652-b6bb-bce75af50e4e/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.506527 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" 
path="/var/lib/kubelet/pods/427d05dd-22a3-4557-b9b3-9b5a39ea4662/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.507818 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af5e897-6640-4718-a326-3d241fc96840" path="/var/lib/kubelet/pods/4af5e897-6640-4718-a326-3d241fc96840/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.508408 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" path="/var/lib/kubelet/pods/8d24538c-d668-497c-b4cd-4bff633b43fb/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.508886 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d5afdb-a473-4923-a414-a7efb63d80cc" path="/var/lib/kubelet/pods/94d5afdb-a473-4923-a414-a7efb63d80cc/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.509829 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97481534-5529-4faf-849e-3b9ce6f44330" path="/var/lib/kubelet/pods/97481534-5529-4faf-849e-3b9ce6f44330/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.510381 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" path="/var/lib/kubelet/pods/eed5f4db-cfa8-46d1-a62b-ed9acc5394f2/volumes" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.563191 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.588862 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:02:46 crc kubenswrapper[4835]: I0129 16:02:46.598404 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.052128 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.052514 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.052809 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.052848 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.053633 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.054923 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.055960 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:47 crc kubenswrapper[4835]: E0129 16:02:47.055985 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:02:48 crc kubenswrapper[4835]: I0129 16:02:48.504382 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" path="/var/lib/kubelet/pods/c15e330a-80e7-4940-b1ec-f63c07d35310/volumes" Jan 29 16:02:50 crc kubenswrapper[4835]: E0129 16:02:50.154931 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:50 crc kubenswrapper[4835]: E0129 16:02:50.159271 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:50 crc kubenswrapper[4835]: E0129 16:02:50.161762 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:50 crc kubenswrapper[4835]: E0129 16:02:50.161810 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" Jan 29 16:02:51 crc kubenswrapper[4835]: E0129 16:02:51.345234 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:51 crc kubenswrapper[4835]: E0129 16:02:51.346943 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:51 crc kubenswrapper[4835]: E0129 16:02:51.348307 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 29 16:02:51 crc kubenswrapper[4835]: E0129 16:02:51.348353 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" Jan 29 16:02:51 crc kubenswrapper[4835]: I0129 16:02:51.614949 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:02:51 crc kubenswrapper[4835]: I0129 16:02:51.615046 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.051596 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.052355 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.052629 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.052888 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.052920 4835 
prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.053989 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.057762 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:52 crc kubenswrapper[4835]: E0129 16:02:52.057812 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:02:53 crc kubenswrapper[4835]: I0129 16:02:53.626752 4835 generic.go:334] "Generic (PLEG): container finished" podID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" exitCode=0 Jan 29 16:02:53 crc kubenswrapper[4835]: I0129 16:02:53.626833 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerDied","Data":"de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56"} Jan 29 16:02:53 crc kubenswrapper[4835]: I0129 16:02:53.968304 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.088476 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.088545 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.088571 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.088627 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqh2c\" (UniqueName: \"kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089201 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089370 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089401 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089538 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089603 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089672 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.089740 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts\") pod \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\" (UID: \"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05\") " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.090451 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.090799 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.090814 4835 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.090823 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.090832 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.094461 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c" (OuterVolumeSpecName: "kube-api-access-bqh2c") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "kube-api-access-bqh2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.098755 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.111630 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.137079 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" (UID: "eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.192636 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.192668 4835 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.192678 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqh2c\" (UniqueName: \"kubernetes.io/projected/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05-kube-api-access-bqh2c\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.192706 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.208828 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.293638 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.638075 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05","Type":"ContainerDied","Data":"7cba8d0e133aadb739ea643f6ade47d113a345698f925cd5992d9f9d01a9d5e0"} Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.638126 4835 scope.go:117] "RemoveContainer" containerID="de457501e5a07c617dbf04732c5d22ec6a0ea955617d21b9724b661a78746f56" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.638167 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.662368 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.666074 4835 scope.go:117] "RemoveContainer" containerID="cab68b1c20e1eeb639188943fc6dfaec2ed8876d8e5fed7d07634355f89cc27c" Jan 29 16:02:54 crc kubenswrapper[4835]: I0129 16:02:54.674050 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.648446 4835 generic.go:334] "Generic (PLEG): container finished" podID="fda576f5-4653-4cad-82de-c734c6c7c867" containerID="d1037751e301323740b1f50d8b6f17e6d9cc7df3812be6e85cd54495a28354ad" exitCode=0 Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.648604 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerDied","Data":"d1037751e301323740b1f50d8b6f17e6d9cc7df3812be6e85cd54495a28354ad"} Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.863316 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920276 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920404 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmwg\" (UniqueName: \"kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920435 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920513 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920532 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920581 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.920615 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config\") pod \"fda576f5-4653-4cad-82de-c734c6c7c867\" (UID: \"fda576f5-4653-4cad-82de-c734c6c7c867\") " Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.929823 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg" (OuterVolumeSpecName: "kube-api-access-kfmwg") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "kube-api-access-kfmwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.931543 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.962145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config" (OuterVolumeSpecName: "config") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.962901 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.973557 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.985059 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:55 crc kubenswrapper[4835]: I0129 16:02:55.990645 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "fda576f5-4653-4cad-82de-c734c6c7c867" (UID: "fda576f5-4653-4cad-82de-c734c6c7c867"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.022934 4835 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.022982 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmwg\" (UniqueName: \"kubernetes.io/projected/fda576f5-4653-4cad-82de-c734c6c7c867-kube-api-access-kfmwg\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.022997 4835 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.023013 4835 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.023027 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.023045 4835 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.023056 4835 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fda576f5-4653-4cad-82de-c734c6c7c867-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.501111 4835 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" path="/var/lib/kubelet/pods/eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05/volumes" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.600521 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635335 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635400 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635427 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635485 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5jx4\" (UniqueName: \"kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635588 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635652 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635690 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.635719 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c0b8ebff-50e7-4e05-96f8-099448fe550f\" (UID: \"c0b8ebff-50e7-4e05-96f8-099448fe550f\") " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.637625 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.638041 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.638358 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.640157 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4" (OuterVolumeSpecName: "kube-api-access-w5jx4") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "kube-api-access-w5jx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.640838 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.646449 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "mysql-db") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.660873 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-646f7c7f9f-p9x92" event={"ID":"fda576f5-4653-4cad-82de-c734c6c7c867","Type":"ContainerDied","Data":"588fca982430b9bab96c94e315015a3a0aa40da494abb82aa015b0f738527a0f"} Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.660916 4835 scope.go:117] "RemoveContainer" containerID="af2ec1e150789b7833e77af35517e4ab23bbeb30ea3652132914ac5c56504042" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.661007 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-646f7c7f9f-p9x92" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.661018 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.664180 4835 generic.go:334] "Generic (PLEG): container finished" podID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" exitCode=0 Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.664217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerDied","Data":"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd"} Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.664241 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c0b8ebff-50e7-4e05-96f8-099448fe550f","Type":"ContainerDied","Data":"07041679b04dff6150ab83b7c3e729ff700117f73d891407d533a99e257d29e0"} Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.664224 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.684289 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c0b8ebff-50e7-4e05-96f8-099448fe550f" (UID: "c0b8ebff-50e7-4e05-96f8-099448fe550f"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.710831 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.713448 4835 scope.go:117] "RemoveContainer" containerID="d1037751e301323740b1f50d8b6f17e6d9cc7df3812be6e85cd54495a28354ad" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.717515 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-646f7c7f9f-p9x92"] Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.730305 4835 scope.go:117] "RemoveContainer" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.736953 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.736992 4835 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737035 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737049 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b8ebff-50e7-4e05-96f8-099448fe550f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737063 4835 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/c0b8ebff-50e7-4e05-96f8-099448fe550f-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737076 4835 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737086 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5jx4\" (UniqueName: \"kubernetes.io/projected/c0b8ebff-50e7-4e05-96f8-099448fe550f-kube-api-access-w5jx4\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.737097 4835 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c0b8ebff-50e7-4e05-96f8-099448fe550f-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.750367 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.750986 4835 scope.go:117] "RemoveContainer" containerID="159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.771853 4835 scope.go:117] "RemoveContainer" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" Jan 29 16:02:56 crc kubenswrapper[4835]: E0129 16:02:56.779757 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd\": container with ID starting with 9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd not found: ID does not exist" containerID="9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd" Jan 29 
16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.779800 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd"} err="failed to get container status \"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd\": rpc error: code = NotFound desc = could not find container \"9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd\": container with ID starting with 9630eba89d9ff70f596b301345fa31ce52fb84a5c0c73bb80e55f7d572cd89bd not found: ID does not exist" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.779824 4835 scope.go:117] "RemoveContainer" containerID="159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87" Jan 29 16:02:56 crc kubenswrapper[4835]: E0129 16:02:56.780309 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87\": container with ID starting with 159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87 not found: ID does not exist" containerID="159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.780347 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87"} err="failed to get container status \"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87\": rpc error: code = NotFound desc = could not find container \"159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87\": container with ID starting with 159e9420b3092e6fa7f616bed3cbc6bbe4a2def6be992e4146c24d68a2fa7d87 not found: ID does not exist" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.838647 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.995294 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:02:56 crc kubenswrapper[4835]: I0129 16:02:56.999626 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.052695 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.053069 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.053390 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.053453 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.053960 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.055181 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.056814 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:02:57 crc kubenswrapper[4835]: E0129 16:02:57.056841 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:02:58 crc kubenswrapper[4835]: I0129 16:02:58.501152 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" path="/var/lib/kubelet/pods/c0b8ebff-50e7-4e05-96f8-099448fe550f/volumes" Jan 29 16:02:58 crc kubenswrapper[4835]: I0129 16:02:58.502248 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" path="/var/lib/kubelet/pods/fda576f5-4653-4cad-82de-c734c6c7c867/volumes" Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.052415 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.053046 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.053269 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.053665 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.053732 4835 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.054454 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.056131 4835 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 29 16:03:02 crc kubenswrapper[4835]: E0129 16:03:02.056189 4835 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-v49xf" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.722284 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-v49xf_cc934dd0-048d-4030-986d-b76e0b966d73/ovs-vswitchd/0.log" Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.725494 4835 generic.go:334] "Generic (PLEG): container finished" podID="cc934dd0-048d-4030-986d-b76e0b966d73" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" exitCode=137 Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.725578 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerDied","Data":"d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d"} Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.734631 4835 generic.go:334] "Generic (PLEG): container finished" podID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerID="497362da3cdc7b0ac0586533a3f77b36a12f4d7072bc9dcc5da2221d2561e5d3" exitCode=137 Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.734674 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"497362da3cdc7b0ac0586533a3f77b36a12f4d7072bc9dcc5da2221d2561e5d3"} Jan 29 16:03:03 crc kubenswrapper[4835]: I0129 16:03:03.925421 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040293 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040652 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040714 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040806 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040832 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.040875 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdg2v\" (UniqueName: 
\"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v\") pod \"cfc7f633-4eb5-481c-9e81-0598c584cc82\" (UID: \"cfc7f633-4eb5-481c-9e81-0598c584cc82\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.041411 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache" (OuterVolumeSpecName: "cache") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.041520 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock" (OuterVolumeSpecName: "lock") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.046607 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v" (OuterVolumeSpecName: "kube-api-access-cdg2v") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "kube-api-access-cdg2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.047145 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.047582 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.142313 4835 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.142352 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdg2v\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-kube-api-access-cdg2v\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.142368 4835 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cfc7f633-4eb5-481c-9e81-0598c584cc82-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.142379 4835 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-cache\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.142390 4835 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cfc7f633-4eb5-481c-9e81-0598c584cc82-lock\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.155991 4835 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 
16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.244147 4835 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.301812 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc7f633-4eb5-481c-9e81-0598c584cc82" (UID: "cfc7f633-4eb5-481c-9e81-0598c584cc82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.345145 4835 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc7f633-4eb5-481c-9e81-0598c584cc82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.750713 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cfc7f633-4eb5-481c-9e81-0598c584cc82","Type":"ContainerDied","Data":"98b91e7cdafe41f98b42b92d53c44e5d5e8f3e01a3618edec724696da407f832"} Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.750772 4835 scope.go:117] "RemoveContainer" containerID="497362da3cdc7b0ac0586533a3f77b36a12f4d7072bc9dcc5da2221d2561e5d3" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.751601 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.770503 4835 scope.go:117] "RemoveContainer" containerID="7ca1893c15edffa73e896d53f7933f0f6d17a150336d46d4a2fa86bcaf2cfdab" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.786422 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.792659 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.796137 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v49xf_cc934dd0-048d-4030-986d-b76e0b966d73/ovs-vswitchd/0.log" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.796827 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.796895 4835 scope.go:117] "RemoveContainer" containerID="0e9a70d0d3e573c08f1f05fea5e81d31b394892a0871babe4ea8df23e129049a" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.816656 4835 scope.go:117] "RemoveContainer" containerID="f3c885696e6da2f898a5b8c49181385ab9235ba5ae191d51cf760e67509c466e" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.838220 4835 scope.go:117] "RemoveContainer" containerID="ad890ccba45d4c845dc515bcfedc4d4060e8c99cb10da916ae788e373d48865b" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854578 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwmbc\" (UniqueName: \"kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854624 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854648 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854694 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854717 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854737 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log\") pod \"cc934dd0-048d-4030-986d-b76e0b966d73\" (UID: \"cc934dd0-048d-4030-986d-b76e0b966d73\") " Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854747 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.854801 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run" (OuterVolumeSpecName: "var-run") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.855022 4835 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.855036 4835 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.855058 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log" (OuterVolumeSpecName: "var-log") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.855076 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib" (OuterVolumeSpecName: "var-lib") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.856410 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts" (OuterVolumeSpecName: "scripts") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.857961 4835 scope.go:117] "RemoveContainer" containerID="1cb7a2f57c5116dd63939c6513e77a00a69a8410f016136f8f3113c4eaefb685" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.857995 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc" (OuterVolumeSpecName: "kube-api-access-jwmbc") pod "cc934dd0-048d-4030-986d-b76e0b966d73" (UID: "cc934dd0-048d-4030-986d-b76e0b966d73"). InnerVolumeSpecName "kube-api-access-jwmbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.935307 4835 scope.go:117] "RemoveContainer" containerID="cb6a17d1b963b37fe45bdd33c7dcb4010669d718352d5ad5ac97fb197646084b" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.955401 4835 scope.go:117] "RemoveContainer" containerID="f95d08291a9c199d391d69a68653c5c74f0e8e27762eb83acb0bc55e0d66b495" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.956122 4835 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cc934dd0-048d-4030-986d-b76e0b966d73-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.956149 4835 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-lib\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.956162 4835 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cc934dd0-048d-4030-986d-b76e0b966d73-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.956175 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwmbc\" (UniqueName: \"kubernetes.io/projected/cc934dd0-048d-4030-986d-b76e0b966d73-kube-api-access-jwmbc\") on node \"crc\" DevicePath \"\"" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.978937 4835 scope.go:117] "RemoveContainer" containerID="8fd88a76264762f5dddc732bba7fb82a2b2f0305de000da2121400725f7c2608" Jan 29 16:03:04 crc kubenswrapper[4835]: I0129 16:03:04.998184 4835 scope.go:117] "RemoveContainer" containerID="474104e42dc8f9996c3980ed13200c007cff2281dd29f3028ef1e3636cd57e4d" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.014407 4835 scope.go:117] "RemoveContainer" containerID="96326193929bfae4736118da0da7aa31ca82117388834ae8a9d814377a2df442" Jan 
29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.041221 4835 scope.go:117] "RemoveContainer" containerID="ea3655dc8836f21bb4e2d129275d9c13c513c97506d8486147669096ca156531" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.058757 4835 scope.go:117] "RemoveContainer" containerID="29693bd067c29bcef68d5b953a18c82bdc942785a44206abd862b321b7c77fb6" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.076678 4835 scope.go:117] "RemoveContainer" containerID="f47a12bee37ff4ad1416b9e4d6055c6ceddc1d33304721e7f0bcb0d682158067" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.094941 4835 scope.go:117] "RemoveContainer" containerID="7d90e6ba25c8c94653671eba222fef93b9b2446541dcca56e78ee9231c96445a" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.760425 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-v49xf_cc934dd0-048d-4030-986d-b76e0b966d73/ovs-vswitchd/0.log" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.761562 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-v49xf" event={"ID":"cc934dd0-048d-4030-986d-b76e0b966d73","Type":"ContainerDied","Data":"fcdcfef0169feb2cb341f0b0b1062319b54c7f2feaedf66711f36f73efcaa44f"} Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.761616 4835 scope.go:117] "RemoveContainer" containerID="d5a3af320f9c7a416603dd4147595101d6a7405f8aa6b7aa9e69dacc69ea383d" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.761734 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-v49xf" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.792228 4835 scope.go:117] "RemoveContainer" containerID="9b0211d90e49cff11c928cc3ac8f47ac7ec0c4841b945522744ce50da53c7829" Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.795189 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.809447 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-v49xf"] Jan 29 16:03:05 crc kubenswrapper[4835]: I0129 16:03:05.819108 4835 scope.go:117] "RemoveContainer" containerID="1126a1245e4d866681af419f64243fbad52d410573c374d6743124c8332dfde0" Jan 29 16:03:06 crc kubenswrapper[4835]: I0129 16:03:06.500043 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" path="/var/lib/kubelet/pods/cc934dd0-048d-4030-986d-b76e0b966d73/volumes" Jan 29 16:03:06 crc kubenswrapper[4835]: I0129 16:03:06.500755 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" path="/var/lib/kubelet/pods/cfc7f633-4eb5-481c-9e81-0598c584cc82/volumes" Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.614542 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.615094 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 
16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.615143 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.615900 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.616027 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60" gracePeriod=600 Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.881419 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60" exitCode=0 Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.881489 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60"} Jan 29 16:03:21 crc kubenswrapper[4835]: I0129 16:03:21.881517 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6"} Jan 29 16:03:21 crc kubenswrapper[4835]: 
I0129 16:03:21.881534 4835 scope.go:117] "RemoveContainer" containerID="94bf1095a1dbb38a73953b301b03a0ee6197339e69c88bb04689d61cf46dd085" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.401507 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402488 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-central-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402501 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-central-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402522 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402529 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402539 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402545 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-server" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402554 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402561 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402571 4835 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="mysql-bootstrap" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402577 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="mysql-bootstrap" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402587 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402593 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402600 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="sg-core" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402606 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="sg-core" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402614 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="swift-recon-cron" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402619 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="swift-recon-cron" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402632 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402642 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402654 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-expirer" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402660 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-expirer" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402671 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402679 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402693 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="setup-container" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402700 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="setup-container" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402715 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402723 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402730 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402737 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402748 4835 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="openstack-network-exporter" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402754 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="openstack-network-exporter" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402761 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402767 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402780 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-reaper" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402786 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-reaper" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402793 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402799 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402807 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerName="memcached" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402813 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerName="memcached" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402822 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402828 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402838 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402843 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402853 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402858 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402865 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402871 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402880 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402886 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402895 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402902 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-server" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402913 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402919 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402929 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402935 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402943 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402949 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402956 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="rsync" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402965 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="rsync" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402973 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402980 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.402991 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.402998 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403008 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403014 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403023 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d5afdb-a473-4923-a414-a7efb63d80cc" containerName="keystone-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403028 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d5afdb-a473-4923-a414-a7efb63d80cc" containerName="keystone-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403040 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403045 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403056 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403062 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403071 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403076 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403082 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="ovn-northd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403088 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="ovn-northd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403098 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403104 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-server" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403110 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-notification-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403116 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-notification-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403125 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403130 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403140 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="setup-container" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403145 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="setup-container" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403152 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerName="kube-state-metrics" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403157 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerName="kube-state-metrics" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403165 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403171 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403181 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403188 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403194 4835 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403200 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403208 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403214 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-server" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403226 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403232 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403242 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403248 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-api" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403257 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server-init" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403262 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server-init" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403271 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerName="nova-scheduler-scheduler" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403277 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerName="nova-scheduler-scheduler" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403283 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403290 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403301 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="mysql-bootstrap" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403307 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="mysql-bootstrap" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403317 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403324 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403331 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403338 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403347 4835 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403353 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403361 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-metadata" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403367 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-metadata" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403375 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403381 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.403389 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403397 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403544 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d5afdb-a473-4923-a414-a7efb63d80cc" containerName="keystone-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403562 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403573 4835 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403583 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="openstack-network-exporter" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403589 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403600 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403610 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403618 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403641 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403649 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f4896cd-93c6-4708-8f62-f94cedda3fc0" containerName="nova-cell0-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403658 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403664 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b8ebff-50e7-4e05-96f8-099448fe550f" containerName="galera" Jan 29 16:03:37 crc 
kubenswrapper[4835]: I0129 16:03:37.403670 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="427d05dd-22a3-4557-b9b3-9b5a39ea4662" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403678 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="36560cf0-b011-4fde-8cc0-00d63f2a141a" containerName="nova-cell1-conductor-conductor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403685 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="372d1dd7-8b24-4652-b6bb-bce75af50e4e" containerName="rabbitmq" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403693 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403701 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403709 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b578fda-bb52-4dce-8d76-5e8710aa08a8" containerName="kube-state-metrics" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403717 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403728 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403737 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403746 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d24538c-d668-497c-b4cd-4bff633b43fb" containerName="memcached" 
Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403753 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403764 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda576f5-4653-4cad-82de-c734c6c7c867" containerName="neutron-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403771 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0cc44f-ffb4-4d53-8219-a3ce2bd31e05" containerName="galera" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403779 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2798812a-4bab-4c0e-a55d-f4c6232e67ae" containerName="nova-scheduler-scheduler" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403786 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403797 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-metadata" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403804 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaadbdc4-0e1d-4e7a-9f98-26a8ef8799b1" containerName="placement-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403813 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403820 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-replicator" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403826 4835 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eed5f4db-cfa8-46d1-a62b-ed9acc5394f2" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403832 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-notification-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403840 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403848 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403855 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="sg-core" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403864 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-httpd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403872 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f1c686-0356-4885-b6f2-b5b80279f19f" containerName="proxy-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403878 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-auditor" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403885 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovsdb-server" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403891 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d707fad-0945-4d95-a1ac-3fcf77e60566" containerName="ovn-northd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403897 4835 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="97481534-5529-4faf-849e-3b9ce6f44330" containerName="mariadb-account-create-update" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403907 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="account-reaper" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403914 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="7276d970-f119-4ab0-9e02-69bca14fc8db" containerName="nova-metadata-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403920 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="rsync" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403928 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="swift-recon-cron" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403934 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0429dcb-ecbc-4cdb-8e05-7a8f8f3a4141" containerName="glance-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403942 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc934dd0-048d-4030-986d-b76e0b966d73" containerName="ovs-vswitchd" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403950 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403957 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="container-updater" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403964 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f331dcd-ab0b-4cbc-86dc-5a5c00e08def" containerName="cinder-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403974 
4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af5e897-6640-4718-a326-3d241fc96840" containerName="nova-api-api" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403982 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc7f633-4eb5-481c-9e81-0598c584cc82" containerName="object-expirer" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.403989 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15e330a-80e7-4940-b1ec-f63c07d35310" containerName="ceilometer-central-agent" Jan 29 16:03:37 crc kubenswrapper[4835]: E0129 16:03:37.404125 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.404132 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="83e68d7c-697f-41e0-94e8-99a198bf89ad" containerName="barbican-api-log" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.405047 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.414219 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.417253 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.417352 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4psj\" (UniqueName: \"kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.417430 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.519189 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.519308 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.519346 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4psj\" (UniqueName: \"kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.519701 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.519810 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.543775 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4psj\" (UniqueName: \"kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj\") pod \"redhat-operators-9sdc6\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:37 crc kubenswrapper[4835]: I0129 16:03:37.730237 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:03:38 crc kubenswrapper[4835]: I0129 16:03:38.210449 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:03:38 crc kubenswrapper[4835]: W0129 16:03:38.215833 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dfe4c4b_245a_42ba_917c_a7bae15f640e.slice/crio-c07a61f2b9902bd92269f5e42165f99b8a1304b24e1f127596d83b506278cf59 WatchSource:0}: Error finding container c07a61f2b9902bd92269f5e42165f99b8a1304b24e1f127596d83b506278cf59: Status 404 returned error can't find the container with id c07a61f2b9902bd92269f5e42165f99b8a1304b24e1f127596d83b506278cf59 Jan 29 16:03:39 crc kubenswrapper[4835]: I0129 16:03:39.031779 4835 generic.go:334] "Generic (PLEG): container finished" podID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerID="407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20" exitCode=0 Jan 29 16:03:39 crc kubenswrapper[4835]: I0129 16:03:39.032079 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerDied","Data":"407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20"} Jan 29 16:03:39 crc kubenswrapper[4835]: I0129 16:03:39.032197 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerStarted","Data":"c07a61f2b9902bd92269f5e42165f99b8a1304b24e1f127596d83b506278cf59"} Jan 29 16:03:39 crc kubenswrapper[4835]: I0129 16:03:39.034644 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:03:40 crc kubenswrapper[4835]: E0129 16:03:40.185398 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:03:40 crc kubenswrapper[4835]: E0129 16:03:40.185863 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4psj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9sdc6_openshift-marketplace(2dfe4c4b-245a-42ba-917c-a7bae15f640e): 
ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:03:40 crc kubenswrapper[4835]: E0129 16:03:40.186982 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:03:41 crc kubenswrapper[4835]: E0129 16:03:41.047688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:03:42 crc kubenswrapper[4835]: I0129 16:03:42.224683 4835 scope.go:117] "RemoveContainer" containerID="4c047d38e9f0fb7d24900c59d7d8a6bf6d2cdca2cca5301978c03cb19499c5bb" Jan 29 16:03:42 crc kubenswrapper[4835]: I0129 16:03:42.262363 4835 scope.go:117] "RemoveContainer" containerID="9c477481de227c5703c9afbeee010830e8cbee3616601ec879c02c3fb4688764" Jan 29 16:03:54 crc kubenswrapper[4835]: E0129 16:03:54.750849 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:03:54 crc kubenswrapper[4835]: E0129 16:03:54.751783 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4psj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9sdc6_openshift-marketplace(2dfe4c4b-245a-42ba-917c-a7bae15f640e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:03:54 crc kubenswrapper[4835]: E0129 16:03:54.753020 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:04:08 crc kubenswrapper[4835]: E0129 16:04:08.499531 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:04:21 crc kubenswrapper[4835]: E0129 16:04:21.639513 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 16:04:21 crc kubenswrapper[4835]: E0129 16:04:21.640335 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4psj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9sdc6_openshift-marketplace(2dfe4c4b-245a-42ba-917c-a7bae15f640e): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:04:21 crc kubenswrapper[4835]: E0129 16:04:21.641576 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:04:33 crc kubenswrapper[4835]: E0129 16:04:33.494525 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.812757 4835 scope.go:117] "RemoveContainer" containerID="61604143c8675d3e148a9653070e2523fa21b55358537c76a5a550c480dd6178" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.836824 4835 scope.go:117] "RemoveContainer" containerID="fe35fc9fe00311f2a2a93d003c1ee89fd0b28151e8d0b023e21aa2fb9fec2a37" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.856648 4835 scope.go:117] "RemoveContainer" containerID="c2c0850d47993a63de0d0ffdab0a6d85f2eb69d829db94ee24b2af2e9703ece8" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.881790 4835 scope.go:117] "RemoveContainer" containerID="49a71a352ed53533222867f731b5079ce92b278c652a47b895136281569e2554" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.914734 4835 scope.go:117] "RemoveContainer" containerID="9bf70d93273d409efcf76ced575d4a904a19050bbe879a419756759665374eb8" Jan 29 16:04:42 crc kubenswrapper[4835]: I0129 16:04:42.932596 4835 scope.go:117] "RemoveContainer" containerID="a73ec550d69bf4647d917ba77a530d4586b65fc08a4e29e5b8caebed151f48d7" Jan 29 16:04:47 crc kubenswrapper[4835]: E0129 16:04:47.494165 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:04:58 crc kubenswrapper[4835]: E0129 
16:04:58.526958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" Jan 29 16:05:14 crc kubenswrapper[4835]: I0129 16:05:14.791580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerStarted","Data":"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4"} Jan 29 16:05:15 crc kubenswrapper[4835]: I0129 16:05:15.800940 4835 generic.go:334] "Generic (PLEG): container finished" podID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerID="04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4" exitCode=0 Jan 29 16:05:15 crc kubenswrapper[4835]: I0129 16:05:15.801177 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerDied","Data":"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4"} Jan 29 16:05:16 crc kubenswrapper[4835]: I0129 16:05:16.816503 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerStarted","Data":"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36"} Jan 29 16:05:16 crc kubenswrapper[4835]: I0129 16:05:16.841201 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9sdc6" podStartSLOduration=2.292268769 podStartE2EDuration="1m39.841179576s" podCreationTimestamp="2026-01-29 16:03:37 +0000 UTC" firstStartedPulling="2026-01-29 16:03:39.034405239 +0000 UTC m=+2121.227448893" lastFinishedPulling="2026-01-29 16:05:16.583316046 
+0000 UTC m=+2218.776359700" observedRunningTime="2026-01-29 16:05:16.835446952 +0000 UTC m=+2219.028490606" watchObservedRunningTime="2026-01-29 16:05:16.841179576 +0000 UTC m=+2219.034223230" Jan 29 16:05:17 crc kubenswrapper[4835]: I0129 16:05:17.731362 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:17 crc kubenswrapper[4835]: I0129 16:05:17.731410 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:18 crc kubenswrapper[4835]: I0129 16:05:18.774595 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="registry-server" probeResult="failure" output=< Jan 29 16:05:18 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 16:05:18 crc kubenswrapper[4835]: > Jan 29 16:05:21 crc kubenswrapper[4835]: I0129 16:05:21.614446 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:05:21 crc kubenswrapper[4835]: I0129 16:05:21.614914 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:05:27 crc kubenswrapper[4835]: I0129 16:05:27.786143 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:27 crc kubenswrapper[4835]: I0129 16:05:27.851949 4835 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:28 crc kubenswrapper[4835]: I0129 16:05:28.056039 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:05:28 crc kubenswrapper[4835]: I0129 16:05:28.905761 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9sdc6" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="registry-server" containerID="cri-o://e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36" gracePeriod=2 Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.402133 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.471649 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content\") pod \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.471812 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4psj\" (UniqueName: \"kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj\") pod \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.471864 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities\") pod \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\" (UID: \"2dfe4c4b-245a-42ba-917c-a7bae15f640e\") " Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.473043 4835 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities" (OuterVolumeSpecName: "utilities") pod "2dfe4c4b-245a-42ba-917c-a7bae15f640e" (UID: "2dfe4c4b-245a-42ba-917c-a7bae15f640e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.477242 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj" (OuterVolumeSpecName: "kube-api-access-l4psj") pod "2dfe4c4b-245a-42ba-917c-a7bae15f640e" (UID: "2dfe4c4b-245a-42ba-917c-a7bae15f640e"). InnerVolumeSpecName "kube-api-access-l4psj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.573859 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4psj\" (UniqueName: \"kubernetes.io/projected/2dfe4c4b-245a-42ba-917c-a7bae15f640e-kube-api-access-l4psj\") on node \"crc\" DevicePath \"\"" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.573888 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.611386 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dfe4c4b-245a-42ba-917c-a7bae15f640e" (UID: "2dfe4c4b-245a-42ba-917c-a7bae15f640e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.676016 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dfe4c4b-245a-42ba-917c-a7bae15f640e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.921534 4835 generic.go:334] "Generic (PLEG): container finished" podID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerID="e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36" exitCode=0 Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.921600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerDied","Data":"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36"} Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.921645 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9sdc6" event={"ID":"2dfe4c4b-245a-42ba-917c-a7bae15f640e","Type":"ContainerDied","Data":"c07a61f2b9902bd92269f5e42165f99b8a1304b24e1f127596d83b506278cf59"} Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.921675 4835 scope.go:117] "RemoveContainer" containerID="e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.921702 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9sdc6" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.953673 4835 scope.go:117] "RemoveContainer" containerID="04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4" Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.971404 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.979158 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9sdc6"] Jan 29 16:05:29 crc kubenswrapper[4835]: I0129 16:05:29.983219 4835 scope.go:117] "RemoveContainer" containerID="407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.000523 4835 scope.go:117] "RemoveContainer" containerID="e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36" Jan 29 16:05:30 crc kubenswrapper[4835]: E0129 16:05:30.001004 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36\": container with ID starting with e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36 not found: ID does not exist" containerID="e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.001060 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36"} err="failed to get container status \"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36\": rpc error: code = NotFound desc = could not find container \"e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36\": container with ID starting with e5cde46f177c23784cfadc17e160ce4dbe918e02f8110ee4107f24db12ca6c36 not found: ID does 
not exist" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.001092 4835 scope.go:117] "RemoveContainer" containerID="04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4" Jan 29 16:05:30 crc kubenswrapper[4835]: E0129 16:05:30.001510 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4\": container with ID starting with 04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4 not found: ID does not exist" containerID="04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.001584 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4"} err="failed to get container status \"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4\": rpc error: code = NotFound desc = could not find container \"04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4\": container with ID starting with 04f3ed18d566b932c3d574b9991af516d39f876b0d505c4f89ef24bcc705d0a4 not found: ID does not exist" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.001612 4835 scope.go:117] "RemoveContainer" containerID="407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20" Jan 29 16:05:30 crc kubenswrapper[4835]: E0129 16:05:30.002036 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20\": container with ID starting with 407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20 not found: ID does not exist" containerID="407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.002077 4835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20"} err="failed to get container status \"407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20\": rpc error: code = NotFound desc = could not find container \"407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20\": container with ID starting with 407f4c500a23890993d8016b0b62aa137b34d57778caf7755e19789a17030e20 not found: ID does not exist" Jan 29 16:05:30 crc kubenswrapper[4835]: I0129 16:05:30.499977 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" path="/var/lib/kubelet/pods/2dfe4c4b-245a-42ba-917c-a7bae15f640e/volumes" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.031279 4835 scope.go:117] "RemoveContainer" containerID="b7c3a6f248cc10844b4a82f340816ecc29e9dbbc1ba3d95f6ca0198bc2060772" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.049867 4835 scope.go:117] "RemoveContainer" containerID="3b5ac41ff2999bbe5b233b240b25d7210474d04899fa404444643a14fe3a63b8" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.093180 4835 scope.go:117] "RemoveContainer" containerID="a8d8ab1ab23f1b95dac089ca2e28dcbde834129ee761da5435b25a8fb918c2a3" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.108671 4835 scope.go:117] "RemoveContainer" containerID="323900448afe12597912167cec80c6055f1cfa296cfa30f13e8c1ae7e85ab7a5" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.123653 4835 scope.go:117] "RemoveContainer" containerID="cad574b1637439f957e68f49246b39bbce29a258431d631b0bbeebb7357aa84b" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.138169 4835 scope.go:117] "RemoveContainer" containerID="46336d9efd9408b58cac7352d53bd1d6ea40255bd86b5b6949cd3845c7dd711f" Jan 29 16:05:43 crc kubenswrapper[4835]: I0129 16:05:43.157669 4835 scope.go:117] "RemoveContainer" containerID="829edd3eaac3cda072da89c586d4741a25321735c6e777e71fbd084cf47dbc31" 
Jan 29 16:05:51 crc kubenswrapper[4835]: I0129 16:05:51.614130 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:05:51 crc kubenswrapper[4835]: I0129 16:05:51.614727 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:06:21 crc kubenswrapper[4835]: I0129 16:06:21.614972 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:06:21 crc kubenswrapper[4835]: I0129 16:06:21.615548 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:06:21 crc kubenswrapper[4835]: I0129 16:06:21.615599 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:06:21 crc kubenswrapper[4835]: I0129 16:06:21.616223 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6"} 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:06:21 crc kubenswrapper[4835]: I0129 16:06:21.616278 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" gracePeriod=600 Jan 29 16:06:21 crc kubenswrapper[4835]: E0129 16:06:21.736339 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:06:22 crc kubenswrapper[4835]: I0129 16:06:22.338031 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" exitCode=0 Jan 29 16:06:22 crc kubenswrapper[4835]: I0129 16:06:22.338082 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6"} Jan 29 16:06:22 crc kubenswrapper[4835]: I0129 16:06:22.338119 4835 scope.go:117] "RemoveContainer" containerID="cefdc8553596927103547d09a558f61d60b8753a9e84fa95f7229469a0811e60" Jan 29 16:06:22 crc kubenswrapper[4835]: I0129 16:06:22.338676 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 
29 16:06:22 crc kubenswrapper[4835]: E0129 16:06:22.338897 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:06:37 crc kubenswrapper[4835]: I0129 16:06:37.491912 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:06:37 crc kubenswrapper[4835]: E0129 16:06:37.492617 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:06:43 crc kubenswrapper[4835]: I0129 16:06:43.221733 4835 scope.go:117] "RemoveContainer" containerID="9a55065beb00a16ee66830598aa3988f30bca1e1c7404cd78f965176724c270d" Jan 29 16:06:43 crc kubenswrapper[4835]: I0129 16:06:43.250630 4835 scope.go:117] "RemoveContainer" containerID="815a87a6496ef6eef743cf2be54f4fb51b0176be5179f4ab3e8d95b561a5a999" Jan 29 16:06:43 crc kubenswrapper[4835]: I0129 16:06:43.269748 4835 scope.go:117] "RemoveContainer" containerID="df1ae7ef1baa5c24dbe5f47dc713db2de333f1ad6a4144d55433a36d695a3716" Jan 29 16:06:48 crc kubenswrapper[4835]: I0129 16:06:48.497283 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:06:48 crc kubenswrapper[4835]: E0129 16:06:48.497776 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:07:00 crc kubenswrapper[4835]: I0129 16:07:00.492145 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:07:00 crc kubenswrapper[4835]: E0129 16:07:00.492723 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:07:11 crc kubenswrapper[4835]: I0129 16:07:11.492366 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:07:11 crc kubenswrapper[4835]: E0129 16:07:11.493229 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:07:23 crc kubenswrapper[4835]: I0129 16:07:23.492129 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:07:23 crc kubenswrapper[4835]: E0129 16:07:23.492872 4835 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:07:35 crc kubenswrapper[4835]: I0129 16:07:35.492280 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:07:35 crc kubenswrapper[4835]: E0129 16:07:35.494057 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:07:43 crc kubenswrapper[4835]: I0129 16:07:43.303813 4835 scope.go:117] "RemoveContainer" containerID="8d18ece74a776d4b42d09242899d749d6612e084ed5e056932402fba3ffdfebc" Jan 29 16:07:43 crc kubenswrapper[4835]: I0129 16:07:43.342386 4835 scope.go:117] "RemoveContainer" containerID="efef5c2434c64fc498f94187b303b93ccc7229269b7c6f889de76a7809116c0b" Jan 29 16:07:43 crc kubenswrapper[4835]: I0129 16:07:43.360352 4835 scope.go:117] "RemoveContainer" containerID="fdbbcf4596261b258901f79039e70681333739ff1835fe063896f936a77e3fbb" Jan 29 16:07:43 crc kubenswrapper[4835]: I0129 16:07:43.397017 4835 scope.go:117] "RemoveContainer" containerID="2eb7a757393f8bf70cc9374d3de67c78e1d018eb5513f80056f8cab73670e577" Jan 29 16:07:43 crc kubenswrapper[4835]: I0129 16:07:43.411398 4835 scope.go:117] "RemoveContainer" containerID="1fcaed86489c7aecce529375985bed09cd24f7a06d51eb4a85cd96ac9381dfeb" Jan 29 16:07:43 crc 
kubenswrapper[4835]: I0129 16:07:43.449976 4835 scope.go:117] "RemoveContainer" containerID="3856a12e79129eb6cfdf75bb770bfb4d042aade7bcef7272efcbfe1248749801" Jan 29 16:07:46 crc kubenswrapper[4835]: I0129 16:07:46.492127 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:07:46 crc kubenswrapper[4835]: E0129 16:07:46.493276 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:01 crc kubenswrapper[4835]: I0129 16:08:01.491714 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:08:01 crc kubenswrapper[4835]: E0129 16:08:01.492553 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:15 crc kubenswrapper[4835]: I0129 16:08:15.492664 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:08:15 crc kubenswrapper[4835]: E0129 16:08:15.493891 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:27 crc kubenswrapper[4835]: I0129 16:08:27.492253 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:08:27 crc kubenswrapper[4835]: E0129 16:08:27.493104 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:39 crc kubenswrapper[4835]: I0129 16:08:39.492376 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:08:39 crc kubenswrapper[4835]: E0129 16:08:39.492945 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:43 crc kubenswrapper[4835]: I0129 16:08:43.549362 4835 scope.go:117] "RemoveContainer" containerID="68f885bfbd51396b1e518a5b7261073d4617342910af6cce3149e027107f651e" Jan 29 16:08:43 crc kubenswrapper[4835]: I0129 16:08:43.584226 4835 scope.go:117] "RemoveContainer" containerID="51cd8eb067ac6ab7e38ed9b480cf21bc4f6e8171ac2b87b3dc825503ca0a087c" Jan 29 16:08:43 crc kubenswrapper[4835]: I0129 16:08:43.604875 4835 
scope.go:117] "RemoveContainer" containerID="ee958556fc843d154af6291223910bcec2cc7e0ab15659ebccce1239613326c9" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.890039 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:08:45 crc kubenswrapper[4835]: E0129 16:08:45.890888 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="extract-utilities" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.890905 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="extract-utilities" Jan 29 16:08:45 crc kubenswrapper[4835]: E0129 16:08:45.890937 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="extract-content" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.890944 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="extract-content" Jan 29 16:08:45 crc kubenswrapper[4835]: E0129 16:08:45.890958 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="registry-server" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.890968 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="registry-server" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.891227 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfe4c4b-245a-42ba-917c-a7bae15f640e" containerName="registry-server" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.892422 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.911987 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.915596 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rq7\" (UniqueName: \"kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.915660 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:45 crc kubenswrapper[4835]: I0129 16:08:45.915737 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.017282 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.017381 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t9rq7\" (UniqueName: \"kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.017423 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.017978 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.018199 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.045606 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rq7\" (UniqueName: \"kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7\") pod \"certified-operators-rldcp\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.221151 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:08:46 crc kubenswrapper[4835]: I0129 16:08:46.696937 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.322548 4835 generic.go:334] "Generic (PLEG): container finished" podID="9433bafa-f639-49ca-8630-eb978aa94267" containerID="82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b" exitCode=0 Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.322689 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerDied","Data":"82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b"} Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.322870 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerStarted","Data":"89106e3eec310e877df9a8560653801c0ee0eafb5642886212b6647c93f8aca1"} Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.324738 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:08:47 crc kubenswrapper[4835]: E0129 16:08:47.453299 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:47 crc kubenswrapper[4835]: E0129 16:08:47.453548 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9rq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rldcp_openshift-marketplace(9433bafa-f639-49ca-8630-eb978aa94267): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:47 crc kubenswrapper[4835]: E0129 16:08:47.455631 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid 
status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.677661 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.679305 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.693068 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.852936 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.853050 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.853118 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx98q\" (UniqueName: \"kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.954314 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx98q\" (UniqueName: \"kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.954381 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.954440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.954907 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.955020 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.987114 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fx98q\" (UniqueName: \"kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q\") pod \"community-operators-h99hd\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:47 crc kubenswrapper[4835]: I0129 16:08:47.996562 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:08:48 crc kubenswrapper[4835]: E0129 16:08:48.331753 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:08:48 crc kubenswrapper[4835]: I0129 16:08:48.469630 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:08:48 crc kubenswrapper[4835]: W0129 16:08:48.471365 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd82c5f9f_4d4f_4970_bcf4_3bd918ea2c67.slice/crio-bcf32902b860934c7826811d58925cf161571acde0b729c34cf73ec605327453 WatchSource:0}: Error finding container bcf32902b860934c7826811d58925cf161571acde0b729c34cf73ec605327453: Status 404 returned error can't find the container with id bcf32902b860934c7826811d58925cf161571acde0b729c34cf73ec605327453 Jan 29 16:08:49 crc kubenswrapper[4835]: I0129 16:08:49.339233 4835 generic.go:334] "Generic (PLEG): container finished" podID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerID="efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b" exitCode=0 Jan 29 16:08:49 crc kubenswrapper[4835]: I0129 16:08:49.339379 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerDied","Data":"efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b"} Jan 29 16:08:49 crc kubenswrapper[4835]: I0129 16:08:49.339825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerStarted","Data":"bcf32902b860934c7826811d58925cf161571acde0b729c34cf73ec605327453"} Jan 29 16:08:49 crc kubenswrapper[4835]: E0129 16:08:49.466901 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:08:49 crc kubenswrapper[4835]: E0129 16:08:49.467093 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx98q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h99hd_openshift-marketplace(d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:49 crc kubenswrapper[4835]: E0129 16:08:49.468380 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:08:50 crc kubenswrapper[4835]: E0129 16:08:50.347923 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:08:52 crc kubenswrapper[4835]: I0129 16:08:52.491983 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:08:52 crc kubenswrapper[4835]: E0129 16:08:52.492557 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:08:59 crc kubenswrapper[4835]: E0129 16:08:59.615760 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:08:59 crc kubenswrapper[4835]: E0129 16:08:59.616437 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9rq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rldcp_openshift-marketplace(9433bafa-f639-49ca-8630-eb978aa94267): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:08:59 crc kubenswrapper[4835]: E0129 16:08:59.617707 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:09:01 crc kubenswrapper[4835]: E0129 16:09:01.631021 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:09:01 crc kubenswrapper[4835]: E0129 16:09:01.631254 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx98q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h99hd_openshift-marketplace(d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:01 crc kubenswrapper[4835]: E0129 16:09:01.632771 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:09:04 crc kubenswrapper[4835]: I0129 16:09:04.491616 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:09:04 crc kubenswrapper[4835]: E0129 16:09:04.492228 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:09:14 crc kubenswrapper[4835]: E0129 16:09:14.495062 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 
16:09:14 crc kubenswrapper[4835]: E0129 16:09:14.495141 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:09:16 crc kubenswrapper[4835]: I0129 16:09:16.492766 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:09:16 crc kubenswrapper[4835]: E0129 16:09:16.493767 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.638629 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.639295 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx98q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h99hd_openshift-marketplace(d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.640792 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.663365 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.663581 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9rq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rldcp_openshift-marketplace(9433bafa-f639-49ca-8630-eb978aa94267): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:25 crc kubenswrapper[4835]: E0129 16:09:25.664807 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:09:31 crc kubenswrapper[4835]: I0129 16:09:31.492072 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:09:31 crc kubenswrapper[4835]: E0129 16:09:31.492670 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:09:37 crc kubenswrapper[4835]: E0129 16:09:37.495895 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 
16:09:39 crc kubenswrapper[4835]: E0129 16:09:39.494958 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.491714 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:09:45 crc kubenswrapper[4835]: E0129 16:09:45.492415 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.709733 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.712092 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.725156 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.813585 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.813647 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.813680 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgzt\" (UniqueName: \"kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.915244 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.915305 4835 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qdgzt\" (UniqueName: \"kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.915444 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.915850 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.915961 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:45 crc kubenswrapper[4835]: I0129 16:09:45.934137 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgzt\" (UniqueName: \"kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt\") pod \"redhat-marketplace-rfz8s\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:46 crc kubenswrapper[4835]: I0129 16:09:46.045861 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:09:46 crc kubenswrapper[4835]: I0129 16:09:46.480744 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:09:46 crc kubenswrapper[4835]: W0129 16:09:46.487205 4835 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9da2a17_9c18_4043_a3cc_f765c53af31c.slice/crio-8266592e89b0d8cc9aa0a8a3cc0e52eb488ae70dc079fe39565c0a6064c8596e WatchSource:0}: Error finding container 8266592e89b0d8cc9aa0a8a3cc0e52eb488ae70dc079fe39565c0a6064c8596e: Status 404 returned error can't find the container with id 8266592e89b0d8cc9aa0a8a3cc0e52eb488ae70dc079fe39565c0a6064c8596e Jan 29 16:09:46 crc kubenswrapper[4835]: I0129 16:09:46.801838 4835 generic.go:334] "Generic (PLEG): container finished" podID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerID="a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5" exitCode=0 Jan 29 16:09:46 crc kubenswrapper[4835]: I0129 16:09:46.802435 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerDied","Data":"a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5"} Jan 29 16:09:46 crc kubenswrapper[4835]: I0129 16:09:46.803115 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerStarted","Data":"8266592e89b0d8cc9aa0a8a3cc0e52eb488ae70dc079fe39565c0a6064c8596e"} Jan 29 16:09:46 crc kubenswrapper[4835]: E0129 16:09:46.933538 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 
(Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:09:46 crc kubenswrapper[4835]: E0129 16:09:46.933693 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdgzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rfz8s_openshift-marketplace(a9da2a17-9c18-4043-a3cc-f765c53af31c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code 
from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:09:46 crc kubenswrapper[4835]: E0129 16:09:46.934900 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:09:47 crc kubenswrapper[4835]: E0129 16:09:47.810288 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:09:50 crc kubenswrapper[4835]: E0129 16:09:50.494539 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:09:54 crc kubenswrapper[4835]: E0129 16:09:54.494642 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:10:00 crc kubenswrapper[4835]: I0129 16:10:00.491576 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:10:00 crc kubenswrapper[4835]: E0129 16:10:00.493683 4835 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:10:01 crc kubenswrapper[4835]: E0129 16:10:01.630872 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:10:01 crc kubenswrapper[4835]: E0129 16:10:01.631386 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdgzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rfz8s_openshift-marketplace(a9da2a17-9c18-4043-a3cc-f765c53af31c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:01 crc kubenswrapper[4835]: E0129 16:10:01.632688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:10:03 crc kubenswrapper[4835]: E0129 16:10:03.492776 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:10:06 crc kubenswrapper[4835]: E0129 16:10:06.618582 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:10:06 crc kubenswrapper[4835]: E0129 16:10:06.619060 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fx98q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h99hd_openshift-marketplace(d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:06 crc kubenswrapper[4835]: E0129 16:10:06.620191 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:10:11 crc kubenswrapper[4835]: I0129 16:10:11.492304 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:10:11 crc kubenswrapper[4835]: E0129 16:10:11.493293 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:10:14 crc kubenswrapper[4835]: E0129 16:10:14.626145 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:10:14 crc kubenswrapper[4835]: E0129 16:10:14.626322 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9rq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rldcp_openshift-marketplace(9433bafa-f639-49ca-8630-eb978aa94267): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:14 crc kubenswrapper[4835]: E0129 16:10:14.627550 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:10:15 crc kubenswrapper[4835]: E0129 16:10:15.493750 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:10:20 crc kubenswrapper[4835]: E0129 16:10:20.495340 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:10:22 crc kubenswrapper[4835]: I0129 16:10:22.492160 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:10:22 crc kubenswrapper[4835]: E0129 16:10:22.492699 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:10:27 crc kubenswrapper[4835]: E0129 16:10:27.628055 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:10:27 crc kubenswrapper[4835]: E0129 
16:10:27.629645 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdgzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rfz8s_openshift-marketplace(a9da2a17-9c18-4043-a3cc-f765c53af31c): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:10:27 crc kubenswrapper[4835]: E0129 16:10:27.631023 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:10:29 crc kubenswrapper[4835]: E0129 16:10:29.495089 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:10:35 crc kubenswrapper[4835]: E0129 16:10:35.497893 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:10:36 crc kubenswrapper[4835]: I0129 16:10:36.492026 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:10:36 crc kubenswrapper[4835]: E0129 16:10:36.492373 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:10:41 crc kubenswrapper[4835]: E0129 16:10:41.494739 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:10:42 crc kubenswrapper[4835]: E0129 16:10:42.492912 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:10:46 crc kubenswrapper[4835]: E0129 16:10:46.493994 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:10:49 crc kubenswrapper[4835]: I0129 16:10:49.491613 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:10:49 crc kubenswrapper[4835]: E0129 16:10:49.492088 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:10:54 crc kubenswrapper[4835]: E0129 16:10:54.494492 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:10:55 crc kubenswrapper[4835]: E0129 16:10:55.496051 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:10:57 crc kubenswrapper[4835]: E0129 16:10:57.494310 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:11:01 crc kubenswrapper[4835]: I0129 16:11:01.492353 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:11:01 crc kubenswrapper[4835]: E0129 16:11:01.493028 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:11:06 crc kubenswrapper[4835]: E0129 16:11:06.494449 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rfz8s" 
podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" Jan 29 16:11:09 crc kubenswrapper[4835]: E0129 16:11:09.494032 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:11:12 crc kubenswrapper[4835]: E0129 16:11:12.496578 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:11:16 crc kubenswrapper[4835]: I0129 16:11:16.491864 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:11:16 crc kubenswrapper[4835]: E0129 16:11:16.493111 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:11:18 crc kubenswrapper[4835]: I0129 16:11:18.409100 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerStarted","Data":"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6"} Jan 29 16:11:19 crc kubenswrapper[4835]: I0129 16:11:19.417321 4835 generic.go:334] "Generic (PLEG): container finished" podID="a9da2a17-9c18-4043-a3cc-f765c53af31c" 
containerID="d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6" exitCode=0 Jan 29 16:11:19 crc kubenswrapper[4835]: I0129 16:11:19.417372 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerDied","Data":"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6"} Jan 29 16:11:20 crc kubenswrapper[4835]: I0129 16:11:20.426132 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerStarted","Data":"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761"} Jan 29 16:11:20 crc kubenswrapper[4835]: I0129 16:11:20.444681 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rfz8s" podStartSLOduration=2.455579068 podStartE2EDuration="1m35.444661232s" podCreationTimestamp="2026-01-29 16:09:45 +0000 UTC" firstStartedPulling="2026-01-29 16:09:46.804113986 +0000 UTC m=+2488.997157640" lastFinishedPulling="2026-01-29 16:11:19.79319615 +0000 UTC m=+2581.986239804" observedRunningTime="2026-01-29 16:11:20.441255465 +0000 UTC m=+2582.634299129" watchObservedRunningTime="2026-01-29 16:11:20.444661232 +0000 UTC m=+2582.637704886" Jan 29 16:11:24 crc kubenswrapper[4835]: E0129 16:11:24.494055 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" Jan 29 16:11:26 crc kubenswrapper[4835]: I0129 16:11:26.046561 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:26 crc kubenswrapper[4835]: I0129 
16:11:26.046928 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:26 crc kubenswrapper[4835]: I0129 16:11:26.090370 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:26 crc kubenswrapper[4835]: E0129 16:11:26.494726 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" Jan 29 16:11:26 crc kubenswrapper[4835]: I0129 16:11:26.581388 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:26 crc kubenswrapper[4835]: I0129 16:11:26.628607 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:11:27 crc kubenswrapper[4835]: I0129 16:11:27.492438 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:11:28 crc kubenswrapper[4835]: I0129 16:11:28.555543 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0"} Jan 29 16:11:28 crc kubenswrapper[4835]: I0129 16:11:28.555696 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rfz8s" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="registry-server" containerID="cri-o://328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761" gracePeriod=2 Jan 29 16:11:29 crc 
kubenswrapper[4835]: I0129 16:11:29.008807 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.100491 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content\") pod \"a9da2a17-9c18-4043-a3cc-f765c53af31c\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.100628 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities\") pod \"a9da2a17-9c18-4043-a3cc-f765c53af31c\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.100731 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdgzt\" (UniqueName: \"kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt\") pod \"a9da2a17-9c18-4043-a3cc-f765c53af31c\" (UID: \"a9da2a17-9c18-4043-a3cc-f765c53af31c\") " Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.101690 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities" (OuterVolumeSpecName: "utilities") pod "a9da2a17-9c18-4043-a3cc-f765c53af31c" (UID: "a9da2a17-9c18-4043-a3cc-f765c53af31c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.102136 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.107775 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt" (OuterVolumeSpecName: "kube-api-access-qdgzt") pod "a9da2a17-9c18-4043-a3cc-f765c53af31c" (UID: "a9da2a17-9c18-4043-a3cc-f765c53af31c"). InnerVolumeSpecName "kube-api-access-qdgzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.127057 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9da2a17-9c18-4043-a3cc-f765c53af31c" (UID: "a9da2a17-9c18-4043-a3cc-f765c53af31c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.203789 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdgzt\" (UniqueName: \"kubernetes.io/projected/a9da2a17-9c18-4043-a3cc-f765c53af31c-kube-api-access-qdgzt\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.203841 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9da2a17-9c18-4043-a3cc-f765c53af31c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.565056 4835 generic.go:334] "Generic (PLEG): container finished" podID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerID="328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761" exitCode=0 Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.565134 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rfz8s" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.565126 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerDied","Data":"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761"} Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.565443 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rfz8s" event={"ID":"a9da2a17-9c18-4043-a3cc-f765c53af31c","Type":"ContainerDied","Data":"8266592e89b0d8cc9aa0a8a3cc0e52eb488ae70dc079fe39565c0a6064c8596e"} Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.565489 4835 scope.go:117] "RemoveContainer" containerID="328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.581406 4835 scope.go:117] "RemoveContainer" 
containerID="d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.595460 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.615661 4835 scope.go:117] "RemoveContainer" containerID="a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.627664 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rfz8s"] Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.644553 4835 scope.go:117] "RemoveContainer" containerID="328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761" Jan 29 16:11:29 crc kubenswrapper[4835]: E0129 16:11:29.645093 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761\": container with ID starting with 328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761 not found: ID does not exist" containerID="328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.645155 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761"} err="failed to get container status \"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761\": rpc error: code = NotFound desc = could not find container \"328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761\": container with ID starting with 328fa69cc62d1a6457b53fd832f199e6007e13c6be2005890c07090a1b8df761 not found: ID does not exist" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.645176 4835 scope.go:117] "RemoveContainer" 
containerID="d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6" Jan 29 16:11:29 crc kubenswrapper[4835]: E0129 16:11:29.645578 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6\": container with ID starting with d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6 not found: ID does not exist" containerID="d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.645712 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6"} err="failed to get container status \"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6\": rpc error: code = NotFound desc = could not find container \"d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6\": container with ID starting with d044e9c5cabf3b274d1b294291b196cad5d730f23875c0093742d91669c092f6 not found: ID does not exist" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.645815 4835 scope.go:117] "RemoveContainer" containerID="a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5" Jan 29 16:11:29 crc kubenswrapper[4835]: E0129 16:11:29.646220 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5\": container with ID starting with a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5 not found: ID does not exist" containerID="a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5" Jan 29 16:11:29 crc kubenswrapper[4835]: I0129 16:11:29.646333 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5"} err="failed to get container status \"a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5\": rpc error: code = NotFound desc = could not find container \"a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5\": container with ID starting with a55028db4e0615266dd782c4774fae7cacb3cbdffe5ad9000146daeabc2cdfe5 not found: ID does not exist" Jan 29 16:11:30 crc kubenswrapper[4835]: I0129 16:11:30.502527 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" path="/var/lib/kubelet/pods/a9da2a17-9c18-4043-a3cc-f765c53af31c/volumes" Jan 29 16:11:39 crc kubenswrapper[4835]: I0129 16:11:39.635820 4835 generic.go:334] "Generic (PLEG): container finished" podID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerID="33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12" exitCode=0 Jan 29 16:11:39 crc kubenswrapper[4835]: I0129 16:11:39.635867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerDied","Data":"33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12"} Jan 29 16:11:39 crc kubenswrapper[4835]: I0129 16:11:39.638805 4835 generic.go:334] "Generic (PLEG): container finished" podID="9433bafa-f639-49ca-8630-eb978aa94267" containerID="99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94" exitCode=0 Jan 29 16:11:39 crc kubenswrapper[4835]: I0129 16:11:39.638840 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerDied","Data":"99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94"} Jan 29 16:11:40 crc kubenswrapper[4835]: I0129 16:11:40.647308 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerStarted","Data":"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64"} Jan 29 16:11:40 crc kubenswrapper[4835]: I0129 16:11:40.650065 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerStarted","Data":"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f"} Jan 29 16:11:40 crc kubenswrapper[4835]: I0129 16:11:40.666929 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h99hd" podStartSLOduration=2.975225031 podStartE2EDuration="2m53.666910301s" podCreationTimestamp="2026-01-29 16:08:47 +0000 UTC" firstStartedPulling="2026-01-29 16:08:49.341842799 +0000 UTC m=+2431.534886493" lastFinishedPulling="2026-01-29 16:11:40.033528109 +0000 UTC m=+2602.226571763" observedRunningTime="2026-01-29 16:11:40.664209342 +0000 UTC m=+2602.857253006" watchObservedRunningTime="2026-01-29 16:11:40.666910301 +0000 UTC m=+2602.859953975" Jan 29 16:11:40 crc kubenswrapper[4835]: I0129 16:11:40.683094 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rldcp" podStartSLOduration=2.742068079 podStartE2EDuration="2m55.683077661s" podCreationTimestamp="2026-01-29 16:08:45 +0000 UTC" firstStartedPulling="2026-01-29 16:08:47.324435368 +0000 UTC m=+2429.517479022" lastFinishedPulling="2026-01-29 16:11:40.26544496 +0000 UTC m=+2602.458488604" observedRunningTime="2026-01-29 16:11:40.67948002 +0000 UTC m=+2602.872523674" watchObservedRunningTime="2026-01-29 16:11:40.683077661 +0000 UTC m=+2602.876121315" Jan 29 16:11:46 crc kubenswrapper[4835]: I0129 16:11:46.222130 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rldcp" 
Jan 29 16:11:46 crc kubenswrapper[4835]: I0129 16:11:46.222662 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:11:46 crc kubenswrapper[4835]: I0129 16:11:46.264410 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:11:46 crc kubenswrapper[4835]: I0129 16:11:46.739961 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:11:46 crc kubenswrapper[4835]: I0129 16:11:46.952708 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:11:47 crc kubenswrapper[4835]: I0129 16:11:47.997203 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:47 crc kubenswrapper[4835]: I0129 16:11:47.997266 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:48 crc kubenswrapper[4835]: I0129 16:11:48.037421 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:48 crc kubenswrapper[4835]: I0129 16:11:48.704367 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rldcp" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="registry-server" containerID="cri-o://2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f" gracePeriod=2 Jan 29 16:11:48 crc kubenswrapper[4835]: I0129 16:11:48.749141 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.030010 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.194596 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities\") pod \"9433bafa-f639-49ca-8630-eb978aa94267\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.194694 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9rq7\" (UniqueName: \"kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7\") pod \"9433bafa-f639-49ca-8630-eb978aa94267\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.195575 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities" (OuterVolumeSpecName: "utilities") pod "9433bafa-f639-49ca-8630-eb978aa94267" (UID: "9433bafa-f639-49ca-8630-eb978aa94267"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.195650 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content\") pod \"9433bafa-f639-49ca-8630-eb978aa94267\" (UID: \"9433bafa-f639-49ca-8630-eb978aa94267\") " Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.196303 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.200295 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7" (OuterVolumeSpecName: "kube-api-access-t9rq7") pod "9433bafa-f639-49ca-8630-eb978aa94267" (UID: "9433bafa-f639-49ca-8630-eb978aa94267"). InnerVolumeSpecName "kube-api-access-t9rq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.246967 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9433bafa-f639-49ca-8630-eb978aa94267" (UID: "9433bafa-f639-49ca-8630-eb978aa94267"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.298059 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9433bafa-f639-49ca-8630-eb978aa94267-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.298103 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9rq7\" (UniqueName: \"kubernetes.io/projected/9433bafa-f639-49ca-8630-eb978aa94267-kube-api-access-t9rq7\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.712814 4835 generic.go:334] "Generic (PLEG): container finished" podID="9433bafa-f639-49ca-8630-eb978aa94267" containerID="2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f" exitCode=0 Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.712901 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rldcp" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.712934 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerDied","Data":"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f"} Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.712968 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rldcp" event={"ID":"9433bafa-f639-49ca-8630-eb978aa94267","Type":"ContainerDied","Data":"89106e3eec310e877df9a8560653801c0ee0eafb5642886212b6647c93f8aca1"} Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.712989 4835 scope.go:117] "RemoveContainer" containerID="2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.728251 4835 scope.go:117] "RemoveContainer" 
containerID="99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.745700 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.751584 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rldcp"] Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.754352 4835 scope.go:117] "RemoveContainer" containerID="82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.770218 4835 scope.go:117] "RemoveContainer" containerID="2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f" Jan 29 16:11:49 crc kubenswrapper[4835]: E0129 16:11:49.770653 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f\": container with ID starting with 2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f not found: ID does not exist" containerID="2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.770692 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f"} err="failed to get container status \"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f\": rpc error: code = NotFound desc = could not find container \"2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f\": container with ID starting with 2b4296d78372abe21cd37639ecf32ca92144a1958c7d2869d924b8cbf541be9f not found: ID does not exist" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.770719 4835 scope.go:117] "RemoveContainer" 
containerID="99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94" Jan 29 16:11:49 crc kubenswrapper[4835]: E0129 16:11:49.771021 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94\": container with ID starting with 99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94 not found: ID does not exist" containerID="99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.771046 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94"} err="failed to get container status \"99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94\": rpc error: code = NotFound desc = could not find container \"99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94\": container with ID starting with 99614bed461ea706f1825e3913fea83e019e4e26d735c59dcfe4235a09336c94 not found: ID does not exist" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.771066 4835 scope.go:117] "RemoveContainer" containerID="82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b" Jan 29 16:11:49 crc kubenswrapper[4835]: E0129 16:11:49.771349 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b\": container with ID starting with 82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b not found: ID does not exist" containerID="82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b" Jan 29 16:11:49 crc kubenswrapper[4835]: I0129 16:11:49.771377 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b"} err="failed to get container status \"82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b\": rpc error: code = NotFound desc = could not find container \"82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b\": container with ID starting with 82ae43b3821ce458365c814d6f42bb2b98a8c8647963cb645f44bca3d928c60b not found: ID does not exist" Jan 29 16:11:50 crc kubenswrapper[4835]: I0129 16:11:50.353065 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:11:50 crc kubenswrapper[4835]: I0129 16:11:50.501666 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9433bafa-f639-49ca-8630-eb978aa94267" path="/var/lib/kubelet/pods/9433bafa-f639-49ca-8630-eb978aa94267/volumes" Jan 29 16:11:50 crc kubenswrapper[4835]: I0129 16:11:50.723151 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h99hd" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="registry-server" containerID="cri-o://3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64" gracePeriod=2 Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.267730 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.428452 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities\") pod \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.428543 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx98q\" (UniqueName: \"kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q\") pod \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.428669 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content\") pod \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\" (UID: \"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67\") " Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.429310 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities" (OuterVolumeSpecName: "utilities") pod "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" (UID: "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.435709 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q" (OuterVolumeSpecName: "kube-api-access-fx98q") pod "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" (UID: "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67"). InnerVolumeSpecName "kube-api-access-fx98q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.478927 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" (UID: "d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.530415 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.530457 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.530527 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx98q\" (UniqueName: \"kubernetes.io/projected/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67-kube-api-access-fx98q\") on node \"crc\" DevicePath \"\"" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.731204 4835 generic.go:334] "Generic (PLEG): container finished" podID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerID="3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64" exitCode=0 Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.731291 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h99hd" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.731272 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerDied","Data":"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64"} Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.731633 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h99hd" event={"ID":"d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67","Type":"ContainerDied","Data":"bcf32902b860934c7826811d58925cf161571acde0b729c34cf73ec605327453"} Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.731652 4835 scope.go:117] "RemoveContainer" containerID="3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.748300 4835 scope.go:117] "RemoveContainer" containerID="33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.765646 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.771700 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h99hd"] Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.780842 4835 scope.go:117] "RemoveContainer" containerID="efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.798707 4835 scope.go:117] "RemoveContainer" containerID="3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64" Jan 29 16:11:51 crc kubenswrapper[4835]: E0129 16:11:51.799217 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64\": container with ID starting with 3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64 not found: ID does not exist" containerID="3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.799261 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64"} err="failed to get container status \"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64\": rpc error: code = NotFound desc = could not find container \"3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64\": container with ID starting with 3a9c0c0f0f4d7f9660086279f1d5b1307ba303c936186a4a5ed8f6ba3af02b64 not found: ID does not exist" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.799288 4835 scope.go:117] "RemoveContainer" containerID="33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12" Jan 29 16:11:51 crc kubenswrapper[4835]: E0129 16:11:51.799735 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12\": container with ID starting with 33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12 not found: ID does not exist" containerID="33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.799755 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12"} err="failed to get container status \"33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12\": rpc error: code = NotFound desc = could not find container \"33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12\": container with ID 
starting with 33e1d1396ce882028abad88fa279d68f2d39735564fa2f4ab9f35daaea468e12 not found: ID does not exist" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.799769 4835 scope.go:117] "RemoveContainer" containerID="efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b" Jan 29 16:11:51 crc kubenswrapper[4835]: E0129 16:11:51.800086 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b\": container with ID starting with efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b not found: ID does not exist" containerID="efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b" Jan 29 16:11:51 crc kubenswrapper[4835]: I0129 16:11:51.800107 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b"} err="failed to get container status \"efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b\": rpc error: code = NotFound desc = could not find container \"efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b\": container with ID starting with efa2813ca9cad67bcbfcc60a524e623e0f1c382147e2546e883a8000f328131b not found: ID does not exist" Jan 29 16:11:52 crc kubenswrapper[4835]: I0129 16:11:52.500693 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" path="/var/lib/kubelet/pods/d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67/volumes" Jan 29 16:13:51 crc kubenswrapper[4835]: I0129 16:13:51.615560 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:13:51 crc kubenswrapper[4835]: I0129 
16:13:51.616138 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.141577 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142370 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142383 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142396 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142402 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142413 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142420 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142429 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142435 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="extract-utilities" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142445 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142451 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142479 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142485 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142498 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142503 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142512 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142517 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: E0129 16:14:02.142535 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142541 4835 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="extract-content" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142657 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9da2a17-9c18-4043-a3cc-f765c53af31c" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142673 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="9433bafa-f639-49ca-8630-eb978aa94267" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.142682 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82c5f9f-4d4f-4970-bcf4-3bd918ea2c67" containerName="registry-server" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.145327 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.162651 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.329543 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv6n8\" (UniqueName: \"kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.329608 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.329654 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.430689 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv6n8\" (UniqueName: \"kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.430744 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.430822 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.431354 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.431360 4835 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.455319 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv6n8\" (UniqueName: \"kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8\") pod \"redhat-operators-q6prk\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.466134 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:02 crc kubenswrapper[4835]: I0129 16:14:02.909333 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:03 crc kubenswrapper[4835]: I0129 16:14:03.723698 4835 generic.go:334] "Generic (PLEG): container finished" podID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerID="76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd" exitCode=0 Jan 29 16:14:03 crc kubenswrapper[4835]: I0129 16:14:03.723775 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerDied","Data":"76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd"} Jan 29 16:14:03 crc kubenswrapper[4835]: I0129 16:14:03.724181 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerStarted","Data":"afdf5e6a698872cccca5937ce413a9f0a8862c912de48e6395186c84a079bc0d"} Jan 29 16:14:03 crc kubenswrapper[4835]: I0129 16:14:03.726104 4835 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:14:07 crc kubenswrapper[4835]: I0129 16:14:07.749324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerStarted","Data":"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8"} Jan 29 16:14:08 crc kubenswrapper[4835]: I0129 16:14:08.756883 4835 generic.go:334] "Generic (PLEG): container finished" podID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerID="ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8" exitCode=0 Jan 29 16:14:08 crc kubenswrapper[4835]: I0129 16:14:08.756968 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerDied","Data":"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8"} Jan 29 16:14:11 crc kubenswrapper[4835]: I0129 16:14:11.778391 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerStarted","Data":"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72"} Jan 29 16:14:11 crc kubenswrapper[4835]: I0129 16:14:11.795490 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q6prk" podStartSLOduration=2.485260801 podStartE2EDuration="9.795449917s" podCreationTimestamp="2026-01-29 16:14:02 +0000 UTC" firstStartedPulling="2026-01-29 16:14:03.725782154 +0000 UTC m=+2745.918825808" lastFinishedPulling="2026-01-29 16:14:11.03597127 +0000 UTC m=+2753.229014924" observedRunningTime="2026-01-29 16:14:11.79280273 +0000 UTC m=+2753.985846384" watchObservedRunningTime="2026-01-29 16:14:11.795449917 +0000 UTC m=+2753.988493571" Jan 29 16:14:12 crc kubenswrapper[4835]: I0129 
16:14:12.466880 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:12 crc kubenswrapper[4835]: I0129 16:14:12.467229 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:13 crc kubenswrapper[4835]: I0129 16:14:13.511037 4835 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q6prk" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="registry-server" probeResult="failure" output=< Jan 29 16:14:13 crc kubenswrapper[4835]: timeout: failed to connect service ":50051" within 1s Jan 29 16:14:13 crc kubenswrapper[4835]: > Jan 29 16:14:21 crc kubenswrapper[4835]: I0129 16:14:21.614385 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:14:21 crc kubenswrapper[4835]: I0129 16:14:21.614985 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:14:22 crc kubenswrapper[4835]: I0129 16:14:22.513079 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:22 crc kubenswrapper[4835]: I0129 16:14:22.562021 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:22 crc kubenswrapper[4835]: I0129 16:14:22.742856 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:23 crc kubenswrapper[4835]: I0129 16:14:23.851435 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q6prk" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="registry-server" containerID="cri-o://129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72" gracePeriod=2 Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.244911 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.404146 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv6n8\" (UniqueName: \"kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8\") pod \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.404230 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities\") pod \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.404366 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content\") pod \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\" (UID: \"ffeb34bd-819a-41e7-a07a-39a14b1e2e13\") " Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.405768 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities" (OuterVolumeSpecName: "utilities") pod "ffeb34bd-819a-41e7-a07a-39a14b1e2e13" (UID: 
"ffeb34bd-819a-41e7-a07a-39a14b1e2e13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.412081 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8" (OuterVolumeSpecName: "kube-api-access-cv6n8") pod "ffeb34bd-819a-41e7-a07a-39a14b1e2e13" (UID: "ffeb34bd-819a-41e7-a07a-39a14b1e2e13"). InnerVolumeSpecName "kube-api-access-cv6n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.508105 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.508145 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv6n8\" (UniqueName: \"kubernetes.io/projected/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-kube-api-access-cv6n8\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.540906 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffeb34bd-819a-41e7-a07a-39a14b1e2e13" (UID: "ffeb34bd-819a-41e7-a07a-39a14b1e2e13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.610098 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffeb34bd-819a-41e7-a07a-39a14b1e2e13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.859998 4835 generic.go:334] "Generic (PLEG): container finished" podID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerID="129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72" exitCode=0 Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.860057 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerDied","Data":"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72"} Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.860091 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q6prk" event={"ID":"ffeb34bd-819a-41e7-a07a-39a14b1e2e13","Type":"ContainerDied","Data":"afdf5e6a698872cccca5937ce413a9f0a8862c912de48e6395186c84a079bc0d"} Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.860120 4835 scope.go:117] "RemoveContainer" containerID="129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.860278 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q6prk" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.890405 4835 scope.go:117] "RemoveContainer" containerID="ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.894074 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.901713 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q6prk"] Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.909439 4835 scope.go:117] "RemoveContainer" containerID="76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.932977 4835 scope.go:117] "RemoveContainer" containerID="129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72" Jan 29 16:14:24 crc kubenswrapper[4835]: E0129 16:14:24.933487 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72\": container with ID starting with 129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72 not found: ID does not exist" containerID="129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.933518 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72"} err="failed to get container status \"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72\": rpc error: code = NotFound desc = could not find container \"129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72\": container with ID starting with 129c2f6deb78440797ceecc40224701c5a0c128f1ad6b04d9fc19b1a98cb5e72 not found: ID does 
not exist" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.933538 4835 scope.go:117] "RemoveContainer" containerID="ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8" Jan 29 16:14:24 crc kubenswrapper[4835]: E0129 16:14:24.933770 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8\": container with ID starting with ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8 not found: ID does not exist" containerID="ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.933794 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8"} err="failed to get container status \"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8\": rpc error: code = NotFound desc = could not find container \"ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8\": container with ID starting with ae1644bcf298ca68b1c9c7359fb385dac9c59043ee7337ea616f88a9932dd0a8 not found: ID does not exist" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.933814 4835 scope.go:117] "RemoveContainer" containerID="76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd" Jan 29 16:14:24 crc kubenswrapper[4835]: E0129 16:14:24.934159 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd\": container with ID starting with 76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd not found: ID does not exist" containerID="76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd" Jan 29 16:14:24 crc kubenswrapper[4835]: I0129 16:14:24.934204 4835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd"} err="failed to get container status \"76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd\": rpc error: code = NotFound desc = could not find container \"76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd\": container with ID starting with 76eedad6b8cf08c96dcc93a3ab549d3deef13fd3b6bc9601820d85e8d48eaedd not found: ID does not exist" Jan 29 16:14:26 crc kubenswrapper[4835]: I0129 16:14:26.501004 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" path="/var/lib/kubelet/pods/ffeb34bd-819a-41e7-a07a-39a14b1e2e13/volumes" Jan 29 16:14:51 crc kubenswrapper[4835]: I0129 16:14:51.614329 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:14:51 crc kubenswrapper[4835]: I0129 16:14:51.614920 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:14:51 crc kubenswrapper[4835]: I0129 16:14:51.614962 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:14:51 crc kubenswrapper[4835]: I0129 16:14:51.615532 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0"} 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:14:51 crc kubenswrapper[4835]: I0129 16:14:51.615602 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0" gracePeriod=600 Jan 29 16:14:52 crc kubenswrapper[4835]: I0129 16:14:52.035921 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0" exitCode=0 Jan 29 16:14:52 crc kubenswrapper[4835]: I0129 16:14:52.035963 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0"} Jan 29 16:14:52 crc kubenswrapper[4835]: I0129 16:14:52.036035 4835 scope.go:117] "RemoveContainer" containerID="a086a6f484fc0b0f1f3177dc5952c91fd12ce0f8bc4ed3b508f822394f5f68e6" Jan 29 16:14:53 crc kubenswrapper[4835]: I0129 16:14:53.048765 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"} Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.139308 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm"] Jan 29 16:15:00 crc kubenswrapper[4835]: E0129 16:15:00.141353 4835 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="registry-server" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.141491 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="registry-server" Jan 29 16:15:00 crc kubenswrapper[4835]: E0129 16:15:00.141624 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="extract-content" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.141704 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="extract-content" Jan 29 16:15:00 crc kubenswrapper[4835]: E0129 16:15:00.141807 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="extract-utilities" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.141981 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="extract-utilities" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.142399 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffeb34bd-819a-41e7-a07a-39a14b1e2e13" containerName="registry-server" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.143112 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.145458 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.145833 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.148461 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm"] Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.298870 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.298964 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75mlw\" (UniqueName: \"kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.299045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.400336 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75mlw\" (UniqueName: \"kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.400406 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.400497 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.401365 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.407381 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.419916 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75mlw\" (UniqueName: \"kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw\") pod \"collect-profiles-29495055-4pwjm\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.462407 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:00 crc kubenswrapper[4835]: I0129 16:15:00.866789 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm"] Jan 29 16:15:01 crc kubenswrapper[4835]: I0129 16:15:01.106842 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" event={"ID":"f0d6981d-a487-46ab-8895-3fe74d7fa191","Type":"ContainerStarted","Data":"d6ad5aeb0fd0842fe32b3b4a84f735109fdb2841a3541081cc8672202fa6a169"} Jan 29 16:15:01 crc kubenswrapper[4835]: I0129 16:15:01.106889 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" event={"ID":"f0d6981d-a487-46ab-8895-3fe74d7fa191","Type":"ContainerStarted","Data":"d5d2869d19f8a265072fd2205a6e9654bcc7974964a62e00b667657497fc61d3"} Jan 29 16:15:01 crc kubenswrapper[4835]: I0129 16:15:01.124378 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" 
podStartSLOduration=1.124361865 podStartE2EDuration="1.124361865s" podCreationTimestamp="2026-01-29 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 16:15:01.122409098 +0000 UTC m=+2803.315452752" watchObservedRunningTime="2026-01-29 16:15:01.124361865 +0000 UTC m=+2803.317405509" Jan 29 16:15:02 crc kubenswrapper[4835]: I0129 16:15:02.114780 4835 generic.go:334] "Generic (PLEG): container finished" podID="f0d6981d-a487-46ab-8895-3fe74d7fa191" containerID="d6ad5aeb0fd0842fe32b3b4a84f735109fdb2841a3541081cc8672202fa6a169" exitCode=0 Jan 29 16:15:02 crc kubenswrapper[4835]: I0129 16:15:02.114863 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" event={"ID":"f0d6981d-a487-46ab-8895-3fe74d7fa191","Type":"ContainerDied","Data":"d6ad5aeb0fd0842fe32b3b4a84f735109fdb2841a3541081cc8672202fa6a169"} Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.385004 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.447789 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume\") pod \"f0d6981d-a487-46ab-8895-3fe74d7fa191\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.448283 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75mlw\" (UniqueName: \"kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw\") pod \"f0d6981d-a487-46ab-8895-3fe74d7fa191\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.448336 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume\") pod \"f0d6981d-a487-46ab-8895-3fe74d7fa191\" (UID: \"f0d6981d-a487-46ab-8895-3fe74d7fa191\") " Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.448720 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume" (OuterVolumeSpecName: "config-volume") pod "f0d6981d-a487-46ab-8895-3fe74d7fa191" (UID: "f0d6981d-a487-46ab-8895-3fe74d7fa191"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.454517 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f0d6981d-a487-46ab-8895-3fe74d7fa191" (UID: "f0d6981d-a487-46ab-8895-3fe74d7fa191"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.454601 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw" (OuterVolumeSpecName: "kube-api-access-75mlw") pod "f0d6981d-a487-46ab-8895-3fe74d7fa191" (UID: "f0d6981d-a487-46ab-8895-3fe74d7fa191"). InnerVolumeSpecName "kube-api-access-75mlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.549917 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75mlw\" (UniqueName: \"kubernetes.io/projected/f0d6981d-a487-46ab-8895-3fe74d7fa191-kube-api-access-75mlw\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.549950 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0d6981d-a487-46ab-8895-3fe74d7fa191-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:03 crc kubenswrapper[4835]: I0129 16:15:03.549960 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0d6981d-a487-46ab-8895-3fe74d7fa191-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.136775 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" event={"ID":"f0d6981d-a487-46ab-8895-3fe74d7fa191","Type":"ContainerDied","Data":"d5d2869d19f8a265072fd2205a6e9654bcc7974964a62e00b667657497fc61d3"} Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.136822 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d2869d19f8a265072fd2205a6e9654bcc7974964a62e00b667657497fc61d3" Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.136891 4835 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495055-4pwjm" Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.457793 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g"] Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.463682 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495010-pgn6g"] Jan 29 16:15:04 crc kubenswrapper[4835]: I0129 16:15:04.508852 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919e8487-ff15-4a8f-a5e0-dca045e62ae2" path="/var/lib/kubelet/pods/919e8487-ff15-4a8f-a5e0-dca045e62ae2/volumes" Jan 29 16:15:43 crc kubenswrapper[4835]: I0129 16:15:43.769372 4835 scope.go:117] "RemoveContainer" containerID="1d49033fac16d3274421a6493bc32706476436fdbe26e6b04d7cc6d38fe32b24" Jan 29 16:17:21 crc kubenswrapper[4835]: I0129 16:17:21.614252 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:17:21 crc kubenswrapper[4835]: I0129 16:17:21.614975 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:17:51 crc kubenswrapper[4835]: I0129 16:17:51.614648 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 29 16:17:51 crc kubenswrapper[4835]: I0129 16:17:51.615195 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:18:21 crc kubenswrapper[4835]: I0129 16:18:21.614996 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:18:21 crc kubenswrapper[4835]: I0129 16:18:21.615575 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:18:21 crc kubenswrapper[4835]: I0129 16:18:21.615626 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:18:21 crc kubenswrapper[4835]: I0129 16:18:21.616269 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:18:21 crc kubenswrapper[4835]: I0129 16:18:21.616328 4835 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" gracePeriod=600 Jan 29 16:18:21 crc kubenswrapper[4835]: E0129 16:18:21.749969 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:18:22 crc kubenswrapper[4835]: I0129 16:18:22.611585 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" exitCode=0 Jan 29 16:18:22 crc kubenswrapper[4835]: I0129 16:18:22.611638 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"} Jan 29 16:18:22 crc kubenswrapper[4835]: I0129 16:18:22.612171 4835 scope.go:117] "RemoveContainer" containerID="e0293aa5b935e185984d25e15f5cf1c4fbe29498ddc7d13077908c09961e92c0" Jan 29 16:18:22 crc kubenswrapper[4835]: I0129 16:18:22.612728 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:18:22 crc kubenswrapper[4835]: E0129 16:18:22.612963 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:18:37 crc kubenswrapper[4835]: I0129 16:18:37.492171 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:18:37 crc kubenswrapper[4835]: E0129 16:18:37.493763 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:18:51 crc kubenswrapper[4835]: I0129 16:18:51.492584 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:18:51 crc kubenswrapper[4835]: E0129 16:18:51.493369 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:19:04 crc kubenswrapper[4835]: I0129 16:19:04.492895 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:19:04 crc kubenswrapper[4835]: E0129 16:19:04.493655 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:19:18 crc kubenswrapper[4835]: I0129 16:19:18.497430 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:19:18 crc kubenswrapper[4835]: E0129 16:19:18.498221 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:19:33 crc kubenswrapper[4835]: I0129 16:19:33.491397 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:19:33 crc kubenswrapper[4835]: E0129 16:19:33.491966 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:19:45 crc kubenswrapper[4835]: I0129 16:19:45.492005 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:19:45 crc kubenswrapper[4835]: E0129 16:19:45.492797 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:19:57 crc kubenswrapper[4835]: I0129 16:19:57.493046 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:19:57 crc kubenswrapper[4835]: E0129 16:19:57.493968 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:20:10 crc kubenswrapper[4835]: I0129 16:20:10.492370 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:20:10 crc kubenswrapper[4835]: E0129 16:20:10.493083 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:20:24 crc kubenswrapper[4835]: I0129 16:20:24.492840 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:20:24 crc kubenswrapper[4835]: E0129 16:20:24.494145 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:20:36 crc kubenswrapper[4835]: I0129 16:20:36.492014 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:20:36 crc kubenswrapper[4835]: E0129 16:20:36.492924 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:20:46 crc kubenswrapper[4835]: I0129 16:20:46.852906 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"] Jan 29 16:20:46 crc kubenswrapper[4835]: E0129 16:20:46.853799 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d6981d-a487-46ab-8895-3fe74d7fa191" containerName="collect-profiles" Jan 29 16:20:46 crc kubenswrapper[4835]: I0129 16:20:46.853816 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d6981d-a487-46ab-8895-3fe74d7fa191" containerName="collect-profiles" Jan 29 16:20:46 crc kubenswrapper[4835]: I0129 16:20:46.854008 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d6981d-a487-46ab-8895-3fe74d7fa191" containerName="collect-profiles" Jan 29 16:20:46 crc kubenswrapper[4835]: I0129 16:20:46.855175 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:46 crc kubenswrapper[4835]: I0129 16:20:46.869035 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"] Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.023176 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5d86\" (UniqueName: \"kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.023229 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.023252 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.124440 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5d86\" (UniqueName: \"kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.124504 4835 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.124542 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.125163 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.125445 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.155214 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5d86\" (UniqueName: \"kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86\") pod \"redhat-marketplace-98ptc\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") " pod="openshift-marketplace/redhat-marketplace-98ptc" Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.173370 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.653570 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"]
Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.970546 4835 generic.go:334] "Generic (PLEG): container finished" podID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerID="40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80" exitCode=0
Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.970813 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerDied","Data":"40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80"}
Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.971217 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerStarted","Data":"ed12333263448cbcfefb1e46c162429acd1fead9dd4ed288e328066e0756b884"}
Jan 29 16:20:47 crc kubenswrapper[4835]: I0129 16:20:47.973109 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 16:20:48 crc kubenswrapper[4835]: I0129 16:20:48.981665 4835 generic.go:334] "Generic (PLEG): container finished" podID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerID="f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3" exitCode=0
Jan 29 16:20:48 crc kubenswrapper[4835]: I0129 16:20:48.981825 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerDied","Data":"f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3"}
Jan 29 16:20:49 crc kubenswrapper[4835]: I0129 16:20:49.493352 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:20:49 crc kubenswrapper[4835]: E0129 16:20:49.493580 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:20:49 crc kubenswrapper[4835]: I0129 16:20:49.991580 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerStarted","Data":"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"}
Jan 29 16:20:50 crc kubenswrapper[4835]: I0129 16:20:50.020787 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98ptc" podStartSLOduration=2.6367798540000003 podStartE2EDuration="4.020756914s" podCreationTimestamp="2026-01-29 16:20:46 +0000 UTC" firstStartedPulling="2026-01-29 16:20:47.972875814 +0000 UTC m=+3150.165919468" lastFinishedPulling="2026-01-29 16:20:49.356852864 +0000 UTC m=+3151.549896528" observedRunningTime="2026-01-29 16:20:50.014631496 +0000 UTC m=+3152.207675160" watchObservedRunningTime="2026-01-29 16:20:50.020756914 +0000 UTC m=+3152.213800608"
Jan 29 16:20:57 crc kubenswrapper[4835]: I0129 16:20:57.174015 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:20:57 crc kubenswrapper[4835]: I0129 16:20:57.174631 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:20:57 crc kubenswrapper[4835]: I0129 16:20:57.224750 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:20:58 crc kubenswrapper[4835]: I0129 16:20:58.105408 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:20:58 crc kubenswrapper[4835]: I0129 16:20:58.149905 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"]
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.064834 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98ptc" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="registry-server" containerID="cri-o://786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a" gracePeriod=2
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.420635 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.519575 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5d86\" (UniqueName: \"kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86\") pod \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") "
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.519618 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content\") pod \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") "
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.519679 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities\") pod \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\" (UID: \"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e\") "
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.520737 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities" (OuterVolumeSpecName: "utilities") pod "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" (UID: "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.525124 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86" (OuterVolumeSpecName: "kube-api-access-w5d86") pod "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" (UID: "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e"). InnerVolumeSpecName "kube-api-access-w5d86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.544961 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" (UID: "3db942b2-cd45-4d1b-b2dd-d2c5137cae0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.620819 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.620859 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:00 crc kubenswrapper[4835]: I0129 16:21:00.620874 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5d86\" (UniqueName: \"kubernetes.io/projected/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e-kube-api-access-w5d86\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.073596 4835 generic.go:334] "Generic (PLEG): container finished" podID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerID="786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a" exitCode=0
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.073641 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerDied","Data":"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"}
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.073667 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98ptc" event={"ID":"3db942b2-cd45-4d1b-b2dd-d2c5137cae0e","Type":"ContainerDied","Data":"ed12333263448cbcfefb1e46c162429acd1fead9dd4ed288e328066e0756b884"}
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.073685 4835 scope.go:117] "RemoveContainer" containerID="786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.073738 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98ptc"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.092614 4835 scope.go:117] "RemoveContainer" containerID="f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.106973 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"]
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.113514 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98ptc"]
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.122952 4835 scope.go:117] "RemoveContainer" containerID="40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.141233 4835 scope.go:117] "RemoveContainer" containerID="786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"
Jan 29 16:21:01 crc kubenswrapper[4835]: E0129 16:21:01.141728 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a\": container with ID starting with 786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a not found: ID does not exist" containerID="786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.141765 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a"} err="failed to get container status \"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a\": rpc error: code = NotFound desc = could not find container \"786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a\": container with ID starting with 786a2cff4fd579bf0603f4bdf492dbe5ec1ef71e05cb4a95cd4a7542cd54b61a not found: ID does not exist"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.141787 4835 scope.go:117] "RemoveContainer" containerID="f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3"
Jan 29 16:21:01 crc kubenswrapper[4835]: E0129 16:21:01.142181 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3\": container with ID starting with f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3 not found: ID does not exist" containerID="f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.142214 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3"} err="failed to get container status \"f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3\": rpc error: code = NotFound desc = could not find container \"f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3\": container with ID starting with f516aa6bdb3a9445ffb8887cacb252d1a780b05d8da048e5b22ec6566c36a2e3 not found: ID does not exist"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.142231 4835 scope.go:117] "RemoveContainer" containerID="40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80"
Jan 29 16:21:01 crc kubenswrapper[4835]: E0129 16:21:01.142428 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80\": container with ID starting with 40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80 not found: ID does not exist" containerID="40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80"
Jan 29 16:21:01 crc kubenswrapper[4835]: I0129 16:21:01.142458 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80"} err="failed to get container status \"40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80\": rpc error: code = NotFound desc = could not find container \"40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80\": container with ID starting with 40829f418dec8c95cc4f2a866dbe4726f51eb5a534c7c3972e1c150f14078f80 not found: ID does not exist"
Jan 29 16:21:02 crc kubenswrapper[4835]: I0129 16:21:02.499913 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" path="/var/lib/kubelet/pods/3db942b2-cd45-4d1b-b2dd-d2c5137cae0e/volumes"
Jan 29 16:21:03 crc kubenswrapper[4835]: I0129 16:21:03.491981 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:21:03 crc kubenswrapper[4835]: E0129 16:21:03.492194 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.136845 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:14 crc kubenswrapper[4835]: E0129 16:21:14.137741 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="extract-utilities"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.137757 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="extract-utilities"
Jan 29 16:21:14 crc kubenswrapper[4835]: E0129 16:21:14.137766 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="extract-content"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.137776 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="extract-content"
Jan 29 16:21:14 crc kubenswrapper[4835]: E0129 16:21:14.137811 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="registry-server"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.137820 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="registry-server"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.137976 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3db942b2-cd45-4d1b-b2dd-d2c5137cae0e" containerName="registry-server"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.139029 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.159510 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.224045 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkkm4\" (UniqueName: \"kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.224342 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.224483 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.325759 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkkm4\" (UniqueName: \"kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.326052 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.326158 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.326704 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.326759 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.346146 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkkm4\" (UniqueName: \"kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4\") pod \"community-operators-g9q8q\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") " pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.507024 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:14 crc kubenswrapper[4835]: I0129 16:21:14.967941 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:15 crc kubenswrapper[4835]: I0129 16:21:15.165287 4835 generic.go:334] "Generic (PLEG): container finished" podID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerID="bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa" exitCode=0
Jan 29 16:21:15 crc kubenswrapper[4835]: I0129 16:21:15.165338 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerDied","Data":"bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa"}
Jan 29 16:21:15 crc kubenswrapper[4835]: I0129 16:21:15.165623 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerStarted","Data":"b6535da73aa6760ef678bfacf488a83fcb091acee04db4f01b55f7185bc14b54"}
Jan 29 16:21:16 crc kubenswrapper[4835]: I0129 16:21:16.173529 4835 generic.go:334] "Generic (PLEG): container finished" podID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerID="014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7" exitCode=0
Jan 29 16:21:16 crc kubenswrapper[4835]: I0129 16:21:16.173573 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerDied","Data":"014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7"}
Jan 29 16:21:17 crc kubenswrapper[4835]: I0129 16:21:17.184996 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerStarted","Data":"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"}
Jan 29 16:21:17 crc kubenswrapper[4835]: I0129 16:21:17.209605 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9q8q" podStartSLOduration=1.790654813 podStartE2EDuration="3.209585374s" podCreationTimestamp="2026-01-29 16:21:14 +0000 UTC" firstStartedPulling="2026-01-29 16:21:15.166677475 +0000 UTC m=+3177.359721129" lastFinishedPulling="2026-01-29 16:21:16.585608036 +0000 UTC m=+3178.778651690" observedRunningTime="2026-01-29 16:21:17.203893607 +0000 UTC m=+3179.396937261" watchObservedRunningTime="2026-01-29 16:21:17.209585374 +0000 UTC m=+3179.402629028"
Jan 29 16:21:18 crc kubenswrapper[4835]: I0129 16:21:18.496429 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:21:18 crc kubenswrapper[4835]: E0129 16:21:18.497031 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:21:24 crc kubenswrapper[4835]: I0129 16:21:24.507997 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:24 crc kubenswrapper[4835]: I0129 16:21:24.508307 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:24 crc kubenswrapper[4835]: I0129 16:21:24.559258 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:25 crc kubenswrapper[4835]: I0129 16:21:25.271669 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:25 crc kubenswrapper[4835]: I0129 16:21:25.315619 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.255249 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9q8q" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="registry-server" containerID="cri-o://ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf" gracePeriod=2
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.621888 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.812966 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities\") pod \"8a8acb75-f634-47b3-bf63-60b98ea8c688\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") "
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.813019 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content\") pod \"8a8acb75-f634-47b3-bf63-60b98ea8c688\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") "
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.813170 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkkm4\" (UniqueName: \"kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4\") pod \"8a8acb75-f634-47b3-bf63-60b98ea8c688\" (UID: \"8a8acb75-f634-47b3-bf63-60b98ea8c688\") "
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.813764 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities" (OuterVolumeSpecName: "utilities") pod "8a8acb75-f634-47b3-bf63-60b98ea8c688" (UID: "8a8acb75-f634-47b3-bf63-60b98ea8c688"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.820743 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4" (OuterVolumeSpecName: "kube-api-access-pkkm4") pod "8a8acb75-f634-47b3-bf63-60b98ea8c688" (UID: "8a8acb75-f634-47b3-bf63-60b98ea8c688"). InnerVolumeSpecName "kube-api-access-pkkm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.886318 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a8acb75-f634-47b3-bf63-60b98ea8c688" (UID: "8a8acb75-f634-47b3-bf63-60b98ea8c688"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.914744 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkkm4\" (UniqueName: \"kubernetes.io/projected/8a8acb75-f634-47b3-bf63-60b98ea8c688-kube-api-access-pkkm4\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.914788 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:27 crc kubenswrapper[4835]: I0129 16:21:27.914805 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8acb75-f634-47b3-bf63-60b98ea8c688-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.264269 4835 generic.go:334] "Generic (PLEG): container finished" podID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerID="ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf" exitCode=0
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.264324 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerDied","Data":"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"}
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.264366 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9q8q" event={"ID":"8a8acb75-f634-47b3-bf63-60b98ea8c688","Type":"ContainerDied","Data":"b6535da73aa6760ef678bfacf488a83fcb091acee04db4f01b55f7185bc14b54"}
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.264402 4835 scope.go:117] "RemoveContainer" containerID="ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.264637 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9q8q"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.280892 4835 scope.go:117] "RemoveContainer" containerID="014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.295795 4835 scope.go:117] "RemoveContainer" containerID="bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.354409 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.359129 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9q8q"]
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.361038 4835 scope.go:117] "RemoveContainer" containerID="ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"
Jan 29 16:21:28 crc kubenswrapper[4835]: E0129 16:21:28.361545 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf\": container with ID starting with ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf not found: ID does not exist" containerID="ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.361574 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf"} err="failed to get container status \"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf\": rpc error: code = NotFound desc = could not find container \"ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf\": container with ID starting with ff12ddab8f5e16b0848d491def587fd2e794dd26d740130dd3f6b756f1ed7acf not found: ID does not exist"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.361600 4835 scope.go:117] "RemoveContainer" containerID="014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7"
Jan 29 16:21:28 crc kubenswrapper[4835]: E0129 16:21:28.362052 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7\": container with ID starting with 014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7 not found: ID does not exist" containerID="014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.362096 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7"} err="failed to get container status \"014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7\": rpc error: code = NotFound desc = could not find container \"014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7\": container with ID starting with 014696db992d95e09df489b3c5766b795fa85774599b2b977b8ab6efa2ba8ab7 not found: ID does not exist"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.362128 4835 scope.go:117] "RemoveContainer" containerID="bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa"
Jan 29 16:21:28 crc kubenswrapper[4835]: E0129 16:21:28.362406 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa\": container with ID starting with bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa not found: ID does not exist" containerID="bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.362430 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa"} err="failed to get container status \"bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa\": rpc error: code = NotFound desc = could not find container \"bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa\": container with ID starting with bb232b9eaf433b138059ef3e096523a47f35e33365e354f94382670416c7b8fa not found: ID does not exist"
Jan 29 16:21:28 crc kubenswrapper[4835]: E0129 16:21:28.433062 4835 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a8acb75_f634_47b3_bf63_60b98ea8c688.slice/crio-b6535da73aa6760ef678bfacf488a83fcb091acee04db4f01b55f7185bc14b54\": RecentStats: unable to find data in memory cache]"
Jan 29 16:21:28 crc kubenswrapper[4835]: I0129 16:21:28.501124 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" path="/var/lib/kubelet/pods/8a8acb75-f634-47b3-bf63-60b98ea8c688/volumes"
Jan 29 16:21:30 crc kubenswrapper[4835]: I0129 16:21:30.491961 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:21:30 crc kubenswrapper[4835]: E0129 16:21:30.492216 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:21:43 crc kubenswrapper[4835]: I0129 16:21:43.492252 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:21:43 crc kubenswrapper[4835]: E0129 16:21:43.493093 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:21:57 crc kubenswrapper[4835]: I0129 16:21:57.492364 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c"
Jan 29 16:21:57 crc kubenswrapper[4835]: E0129 16:21:57.493166 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.326646 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-splff"]
Jan 29 16:22:04 crc kubenswrapper[4835]: E0129 16:22:04.327614 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="extract-content"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.327634 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="extract-content"
Jan 29 16:22:04 crc kubenswrapper[4835]: E0129 16:22:04.327650 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="extract-utilities"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.327658 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="extract-utilities"
Jan 29 16:22:04 crc kubenswrapper[4835]: E0129 16:22:04.327684 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="registry-server"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.327692 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="registry-server"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.327868 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8acb75-f634-47b3-bf63-60b98ea8c688" containerName="registry-server"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.329074 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.342418 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-splff"]
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.372571 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.372618 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.372673 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9zg\" (UniqueName: \"kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.474245 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9zg\" (UniqueName: \"kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.474373 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.474403 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff"
Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.474979 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities\") pod \"certified-operators-splff\" (UID: 
\"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.475076 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.495901 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9zg\" (UniqueName: \"kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg\") pod \"certified-operators-splff\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:04 crc kubenswrapper[4835]: I0129 16:22:04.650398 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:05 crc kubenswrapper[4835]: I0129 16:22:05.150511 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-splff"] Jan 29 16:22:05 crc kubenswrapper[4835]: I0129 16:22:05.527896 4835 generic.go:334] "Generic (PLEG): container finished" podID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerID="c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030" exitCode=0 Jan 29 16:22:05 crc kubenswrapper[4835]: I0129 16:22:05.527961 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerDied","Data":"c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030"} Jan 29 16:22:05 crc kubenswrapper[4835]: I0129 16:22:05.528029 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerStarted","Data":"238e5d619a54f88852d29130fb1a6b9fd2fb3b7ae56e85e13cd9be2e4b0efad8"} Jan 29 16:22:07 crc kubenswrapper[4835]: I0129 16:22:07.544292 4835 generic.go:334] "Generic (PLEG): container finished" podID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerID="0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524" exitCode=0 Jan 29 16:22:07 crc kubenswrapper[4835]: I0129 16:22:07.544384 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerDied","Data":"0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524"} Jan 29 16:22:08 crc kubenswrapper[4835]: I0129 16:22:08.567600 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" 
event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerStarted","Data":"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9"} Jan 29 16:22:08 crc kubenswrapper[4835]: I0129 16:22:08.591596 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-splff" podStartSLOduration=2.156419673 podStartE2EDuration="4.591575465s" podCreationTimestamp="2026-01-29 16:22:04 +0000 UTC" firstStartedPulling="2026-01-29 16:22:05.530003003 +0000 UTC m=+3227.723046657" lastFinishedPulling="2026-01-29 16:22:07.965158795 +0000 UTC m=+3230.158202449" observedRunningTime="2026-01-29 16:22:08.588115826 +0000 UTC m=+3230.781159480" watchObservedRunningTime="2026-01-29 16:22:08.591575465 +0000 UTC m=+3230.784619119" Jan 29 16:22:12 crc kubenswrapper[4835]: I0129 16:22:12.491851 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:22:12 crc kubenswrapper[4835]: E0129 16:22:12.492545 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:22:14 crc kubenswrapper[4835]: I0129 16:22:14.650651 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:14 crc kubenswrapper[4835]: I0129 16:22:14.650697 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:14 crc kubenswrapper[4835]: I0129 16:22:14.692106 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:15 crc kubenswrapper[4835]: I0129 16:22:15.693202 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:15 crc kubenswrapper[4835]: I0129 16:22:15.751870 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-splff"] Jan 29 16:22:17 crc kubenswrapper[4835]: I0129 16:22:17.632368 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-splff" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="registry-server" containerID="cri-o://6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9" gracePeriod=2 Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.026476 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.167939 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities\") pod \"d08665e3-1685-41c2-b7c4-275dbd26e26f\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.168007 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k9zg\" (UniqueName: \"kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg\") pod \"d08665e3-1685-41c2-b7c4-275dbd26e26f\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.168083 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content\") pod 
\"d08665e3-1685-41c2-b7c4-275dbd26e26f\" (UID: \"d08665e3-1685-41c2-b7c4-275dbd26e26f\") " Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.170249 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities" (OuterVolumeSpecName: "utilities") pod "d08665e3-1685-41c2-b7c4-275dbd26e26f" (UID: "d08665e3-1685-41c2-b7c4-275dbd26e26f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.174979 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg" (OuterVolumeSpecName: "kube-api-access-6k9zg") pod "d08665e3-1685-41c2-b7c4-275dbd26e26f" (UID: "d08665e3-1685-41c2-b7c4-275dbd26e26f"). InnerVolumeSpecName "kube-api-access-6k9zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.270003 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.270041 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k9zg\" (UniqueName: \"kubernetes.io/projected/d08665e3-1685-41c2-b7c4-275dbd26e26f-kube-api-access-6k9zg\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.642012 4835 generic.go:334] "Generic (PLEG): container finished" podID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerID="6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9" exitCode=0 Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.642058 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" 
event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerDied","Data":"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9"} Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.642081 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-splff" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.642093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-splff" event={"ID":"d08665e3-1685-41c2-b7c4-275dbd26e26f","Type":"ContainerDied","Data":"238e5d619a54f88852d29130fb1a6b9fd2fb3b7ae56e85e13cd9be2e4b0efad8"} Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.642118 4835 scope.go:117] "RemoveContainer" containerID="6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.661181 4835 scope.go:117] "RemoveContainer" containerID="0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.681861 4835 scope.go:117] "RemoveContainer" containerID="c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.709720 4835 scope.go:117] "RemoveContainer" containerID="6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9" Jan 29 16:22:18 crc kubenswrapper[4835]: E0129 16:22:18.710216 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9\": container with ID starting with 6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9 not found: ID does not exist" containerID="6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.710252 4835 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9"} err="failed to get container status \"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9\": rpc error: code = NotFound desc = could not find container \"6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9\": container with ID starting with 6b926a4257aae785f31a17b0c007e0a53cf7b7b3ec3fa0d451ccd530283bd1c9 not found: ID does not exist" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.710280 4835 scope.go:117] "RemoveContainer" containerID="0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524" Jan 29 16:22:18 crc kubenswrapper[4835]: E0129 16:22:18.710707 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524\": container with ID starting with 0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524 not found: ID does not exist" containerID="0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.710737 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524"} err="failed to get container status \"0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524\": rpc error: code = NotFound desc = could not find container \"0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524\": container with ID starting with 0c55ee62a65547536ed529f7b2e14c166f4729f06537435de78437c498e2c524 not found: ID does not exist" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.710758 4835 scope.go:117] "RemoveContainer" containerID="c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030" Jan 29 16:22:18 crc kubenswrapper[4835]: E0129 16:22:18.711197 4835 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030\": container with ID starting with c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030 not found: ID does not exist" containerID="c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.711224 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030"} err="failed to get container status \"c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030\": rpc error: code = NotFound desc = could not find container \"c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030\": container with ID starting with c0fbaa28833690fbc18ef9c340b402ef71bf694fca27d2e446fd5321f69a1030 not found: ID does not exist" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.817944 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d08665e3-1685-41c2-b7c4-275dbd26e26f" (UID: "d08665e3-1685-41c2-b7c4-275dbd26e26f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.879529 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d08665e3-1685-41c2-b7c4-275dbd26e26f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.988283 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-splff"] Jan 29 16:22:18 crc kubenswrapper[4835]: I0129 16:22:18.997147 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-splff"] Jan 29 16:22:20 crc kubenswrapper[4835]: I0129 16:22:20.504151 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" path="/var/lib/kubelet/pods/d08665e3-1685-41c2-b7c4-275dbd26e26f/volumes" Jan 29 16:22:27 crc kubenswrapper[4835]: I0129 16:22:27.492052 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:22:27 crc kubenswrapper[4835]: E0129 16:22:27.492752 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:22:41 crc kubenswrapper[4835]: I0129 16:22:41.492252 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:22:41 crc kubenswrapper[4835]: E0129 16:22:41.492874 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:22:54 crc kubenswrapper[4835]: I0129 16:22:54.491965 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:22:54 crc kubenswrapper[4835]: E0129 16:22:54.492718 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:23:06 crc kubenswrapper[4835]: I0129 16:23:06.492407 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:23:06 crc kubenswrapper[4835]: E0129 16:23:06.496780 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:23:18 crc kubenswrapper[4835]: I0129 16:23:18.495811 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:23:18 crc kubenswrapper[4835]: E0129 16:23:18.496334 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:23:29 crc kubenswrapper[4835]: I0129 16:23:29.492987 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:23:30 crc kubenswrapper[4835]: I0129 16:23:30.185123 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217"} Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.914021 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:21 crc kubenswrapper[4835]: E0129 16:25:21.917058 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="registry-server" Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.923641 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="registry-server" Jan 29 16:25:21 crc kubenswrapper[4835]: E0129 16:25:21.923849 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="extract-content" Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.923937 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="extract-content" Jan 29 16:25:21 crc kubenswrapper[4835]: E0129 16:25:21.924039 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="extract-utilities" Jan 29 16:25:21 crc 
kubenswrapper[4835]: I0129 16:25:21.924130 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="extract-utilities" Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.924579 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="d08665e3-1685-41c2-b7c4-275dbd26e26f" containerName="registry-server" Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.926031 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:21 crc kubenswrapper[4835]: I0129 16:25:21.928106 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.075692 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.075764 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9dff\" (UniqueName: \"kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.075856 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc 
kubenswrapper[4835]: I0129 16:25:22.177291 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9dff\" (UniqueName: \"kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.177512 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.177570 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.178113 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.178827 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.206732 4835 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9dff\" (UniqueName: \"kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff\") pod \"redhat-operators-t9rcv\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.241311 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:22 crc kubenswrapper[4835]: I0129 16:25:22.701960 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:23 crc kubenswrapper[4835]: I0129 16:25:23.043184 4835 generic.go:334] "Generic (PLEG): container finished" podID="5308561e-a82a-4e21-8b92-543b2a17e534" containerID="2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6" exitCode=0 Jan 29 16:25:23 crc kubenswrapper[4835]: I0129 16:25:23.043225 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerDied","Data":"2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6"} Jan 29 16:25:23 crc kubenswrapper[4835]: I0129 16:25:23.043252 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerStarted","Data":"053968d63064ed0a50be5e0e8233ecb035ee5c4564ade149ee92d1e35aae96d2"} Jan 29 16:25:24 crc kubenswrapper[4835]: I0129 16:25:24.055408 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerStarted","Data":"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8"} Jan 29 16:25:25 crc kubenswrapper[4835]: I0129 16:25:25.063025 4835 generic.go:334] "Generic 
(PLEG): container finished" podID="5308561e-a82a-4e21-8b92-543b2a17e534" containerID="e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8" exitCode=0 Jan 29 16:25:25 crc kubenswrapper[4835]: I0129 16:25:25.063069 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerDied","Data":"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8"} Jan 29 16:25:26 crc kubenswrapper[4835]: I0129 16:25:26.071035 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerStarted","Data":"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7"} Jan 29 16:25:26 crc kubenswrapper[4835]: I0129 16:25:26.089399 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9rcv" podStartSLOduration=2.701069539 podStartE2EDuration="5.089376637s" podCreationTimestamp="2026-01-29 16:25:21 +0000 UTC" firstStartedPulling="2026-01-29 16:25:23.045376614 +0000 UTC m=+3425.238420268" lastFinishedPulling="2026-01-29 16:25:25.433683712 +0000 UTC m=+3427.626727366" observedRunningTime="2026-01-29 16:25:26.086257847 +0000 UTC m=+3428.279301501" watchObservedRunningTime="2026-01-29 16:25:26.089376637 +0000 UTC m=+3428.282420291" Jan 29 16:25:32 crc kubenswrapper[4835]: I0129 16:25:32.241959 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:32 crc kubenswrapper[4835]: I0129 16:25:32.242456 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:32 crc kubenswrapper[4835]: I0129 16:25:32.281655 4835 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 
29 16:25:33 crc kubenswrapper[4835]: I0129 16:25:33.162685 4835 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:33 crc kubenswrapper[4835]: I0129 16:25:33.204498 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.134500 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9rcv" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="registry-server" containerID="cri-o://d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7" gracePeriod=2 Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.528277 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.676941 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities\") pod \"5308561e-a82a-4e21-8b92-543b2a17e534\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.677020 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content\") pod \"5308561e-a82a-4e21-8b92-543b2a17e534\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.677085 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9dff\" (UniqueName: \"kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff\") pod \"5308561e-a82a-4e21-8b92-543b2a17e534\" (UID: \"5308561e-a82a-4e21-8b92-543b2a17e534\") " Jan 29 
16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.678329 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities" (OuterVolumeSpecName: "utilities") pod "5308561e-a82a-4e21-8b92-543b2a17e534" (UID: "5308561e-a82a-4e21-8b92-543b2a17e534"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.686239 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff" (OuterVolumeSpecName: "kube-api-access-b9dff") pod "5308561e-a82a-4e21-8b92-543b2a17e534" (UID: "5308561e-a82a-4e21-8b92-543b2a17e534"). InnerVolumeSpecName "kube-api-access-b9dff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.779215 4835 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.779250 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9dff\" (UniqueName: \"kubernetes.io/projected/5308561e-a82a-4e21-8b92-543b2a17e534-kube-api-access-b9dff\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.814088 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5308561e-a82a-4e21-8b92-543b2a17e534" (UID: "5308561e-a82a-4e21-8b92-543b2a17e534"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:25:35 crc kubenswrapper[4835]: I0129 16:25:35.883618 4835 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5308561e-a82a-4e21-8b92-543b2a17e534-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.142070 4835 generic.go:334] "Generic (PLEG): container finished" podID="5308561e-a82a-4e21-8b92-543b2a17e534" containerID="d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7" exitCode=0 Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.142114 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerDied","Data":"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7"} Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.142141 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9rcv" event={"ID":"5308561e-a82a-4e21-8b92-543b2a17e534","Type":"ContainerDied","Data":"053968d63064ed0a50be5e0e8233ecb035ee5c4564ade149ee92d1e35aae96d2"} Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.142157 4835 scope.go:117] "RemoveContainer" containerID="d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.142304 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9rcv" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.169132 4835 scope.go:117] "RemoveContainer" containerID="e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.180520 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.189387 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9rcv"] Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.195441 4835 scope.go:117] "RemoveContainer" containerID="2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.215274 4835 scope.go:117] "RemoveContainer" containerID="d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7" Jan 29 16:25:36 crc kubenswrapper[4835]: E0129 16:25:36.215738 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7\": container with ID starting with d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7 not found: ID does not exist" containerID="d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.215779 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7"} err="failed to get container status \"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7\": rpc error: code = NotFound desc = could not find container \"d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7\": container with ID starting with d76dea745355eca1322104dcaafc088134a18f00949c28bb67e2f2e460ab2de7 not found: ID does 
not exist" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.215803 4835 scope.go:117] "RemoveContainer" containerID="e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8" Jan 29 16:25:36 crc kubenswrapper[4835]: E0129 16:25:36.216023 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8\": container with ID starting with e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8 not found: ID does not exist" containerID="e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.216069 4835 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8"} err="failed to get container status \"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8\": rpc error: code = NotFound desc = could not find container \"e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8\": container with ID starting with e4ce1f82763b08d6c49277f4e6800fdd4af6ce713c6b50776a0bd46b6a3bffb8 not found: ID does not exist" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.216084 4835 scope.go:117] "RemoveContainer" containerID="2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6" Jan 29 16:25:36 crc kubenswrapper[4835]: E0129 16:25:36.216314 4835 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6\": container with ID starting with 2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6 not found: ID does not exist" containerID="2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.216336 4835 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6"} err="failed to get container status \"2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6\": rpc error: code = NotFound desc = could not find container \"2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6\": container with ID starting with 2d13760e167c9f851c05b2cb279530cc5325ab35d8d06874065149357b3657c6 not found: ID does not exist" Jan 29 16:25:36 crc kubenswrapper[4835]: I0129 16:25:36.502859 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" path="/var/lib/kubelet/pods/5308561e-a82a-4e21-8b92-543b2a17e534/volumes" Jan 29 16:25:51 crc kubenswrapper[4835]: I0129 16:25:51.614540 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:25:51 crc kubenswrapper[4835]: I0129 16:25:51.615131 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:26:21 crc kubenswrapper[4835]: I0129 16:26:21.616101 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:26:21 crc kubenswrapper[4835]: I0129 16:26:21.617668 4835 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:26:51 crc kubenswrapper[4835]: I0129 16:26:51.615275 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:26:51 crc kubenswrapper[4835]: I0129 16:26:51.615971 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:26:51 crc kubenswrapper[4835]: I0129 16:26:51.616024 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:26:51 crc kubenswrapper[4835]: I0129 16:26:51.616631 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:26:51 crc kubenswrapper[4835]: I0129 16:26:51.616684 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" 
containerID="cri-o://3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217" gracePeriod=600 Jan 29 16:26:52 crc kubenswrapper[4835]: I0129 16:26:52.664780 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217" exitCode=0 Jan 29 16:26:52 crc kubenswrapper[4835]: I0129 16:26:52.664854 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217"} Jan 29 16:26:52 crc kubenswrapper[4835]: I0129 16:26:52.665306 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerStarted","Data":"48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3"} Jan 29 16:26:52 crc kubenswrapper[4835]: I0129 16:26:52.665334 4835 scope.go:117] "RemoveContainer" containerID="b8fe062b4c6c22f2625d65b047303f50bd259cac30100a2e52aeff18166f7d4c" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.156178 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tkmcf/must-gather-zvw76"] Jan 29 16:27:08 crc kubenswrapper[4835]: E0129 16:27:08.156975 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="extract-content" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.156987 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="extract-content" Jan 29 16:27:08 crc kubenswrapper[4835]: E0129 16:27:08.157008 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="registry-server" Jan 29 16:27:08 
crc kubenswrapper[4835]: I0129 16:27:08.157014 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="registry-server" Jan 29 16:27:08 crc kubenswrapper[4835]: E0129 16:27:08.157027 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="extract-utilities" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.157033 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="extract-utilities" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.157166 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="5308561e-a82a-4e21-8b92-543b2a17e534" containerName="registry-server" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.157888 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.159807 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tkmcf"/"kube-root-ca.crt" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.162577 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tkmcf"/"openshift-service-ca.crt" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.169286 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tkmcf/must-gather-zvw76"] Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.303549 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzj9q\" (UniqueName: \"kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.303609 4835 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.404543 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.404667 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzj9q\" (UniqueName: \"kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.405709 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.424251 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzj9q\" (UniqueName: \"kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q\") pod \"must-gather-zvw76\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.480421 4835 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.947937 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tkmcf/must-gather-zvw76"] Jan 29 16:27:08 crc kubenswrapper[4835]: I0129 16:27:08.953229 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:27:09 crc kubenswrapper[4835]: I0129 16:27:09.788885 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tkmcf/must-gather-zvw76" event={"ID":"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64","Type":"ContainerStarted","Data":"2fa7316d670d5edc61babb4b62250d9373b03008b47ebd523afe3632d3927018"} Jan 29 16:27:16 crc kubenswrapper[4835]: I0129 16:27:16.853319 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tkmcf/must-gather-zvw76" event={"ID":"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64","Type":"ContainerStarted","Data":"1bffa605c88ffd761a918816269e8591895b47b266e98c555068d8657620eef2"} Jan 29 16:27:17 crc kubenswrapper[4835]: I0129 16:27:17.861976 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tkmcf/must-gather-zvw76" event={"ID":"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64","Type":"ContainerStarted","Data":"99ea45b723352d2a938114cec5a1c0c6beaad282c7cf429b293985ce169ad5a6"} Jan 29 16:27:17 crc kubenswrapper[4835]: I0129 16:27:17.880208 4835 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tkmcf/must-gather-zvw76" podStartSLOduration=2.259802291 podStartE2EDuration="9.880188363s" podCreationTimestamp="2026-01-29 16:27:08 +0000 UTC" firstStartedPulling="2026-01-29 16:27:08.953197294 +0000 UTC m=+3531.146240948" lastFinishedPulling="2026-01-29 16:27:16.573583366 +0000 UTC m=+3538.766627020" observedRunningTime="2026-01-29 16:27:17.873545814 +0000 UTC m=+3540.066589468" watchObservedRunningTime="2026-01-29 
16:27:17.880188363 +0000 UTC m=+3540.073232017" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.254706 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/util/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.421448 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/util/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.461643 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/pull/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.475639 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/pull/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.640585 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/pull/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.640894 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/extract/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.658984 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1zdbdv_2fdc8ed3-9c40-4e5c-81c2-be42a9bea0ff/util/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.888330 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-bm5sf_b19d416a-a9ff-435d-bf52-edd87d12da46/manager/0.log" Jan 29 16:28:19 crc kubenswrapper[4835]: I0129 16:28:19.909692 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-5n2pr_35afdcf5-a791-4fb2-aeea-0200ff95edf5/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.004871 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-ftjdl_21915d77-1cfb-4c42-827a-a918f63224b5/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.141420 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-jgc8x_3276caaa-e664-4dff-961d-0a4121d4c6a1/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.224844 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-x6zhz_5b5ab528-bcc6-47f2-a7f0-798dc4a09acc/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.323391 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-8c998_7cefb6c7-0870-48df-8246-c69e8cbf82c6/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.534532 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-fq8mf_53c52d1f-17be-421f-8f5e-73cf6aefc28d/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.610400 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-t4z4q_adcf669a-6e0c-4cd6-a753-6ab8a5f15a25/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.729739 4835 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-pvll8_514ee3fd-e758-471d-8378-b5b1c566ba36/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.767081 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-6n6sr_0e32b624-7997-4c99-a4d9-dff4da47132e/manager/0.log" Jan 29 16:28:20 crc kubenswrapper[4835]: I0129 16:28:20.955892 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-n6nf2_41e42cf7-1c34-41ad-ab8f-0fd362f01f58/manager/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.057149 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-pbv9r_0337a106-8b4b-41c0-824c-394d50c9ad6d/manager/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.207860 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-g9pk9_d82b7063-6a29-4b42-807d-f49f8881be3e/manager/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.242195 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-dnwhm_c13709c8-6829-4e4f-9acd-46aeaa147049/manager/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.422268 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc76tzkp_9674c18b-3676-4101-b206-5048d14862af/manager/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.549087 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-66f9l_c25d2746-9c15-4529-9de9-a62bf919ccce/operator/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.727597 4835 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8n788_13c4685b-af72-42f0-8ba3-1d3ef0e82c87/registry-server/0.log" Jan 29 16:28:21 crc kubenswrapper[4835]: I0129 16:28:21.956061 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-xmd6m_6716e375-4195-40bf-a20b-2a716135e4ef/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.045658 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-gvw9q_a333de09-dfc0-446d-85d1-08f31bf6c540/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.220836 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-b57bv_a5745c9f-9837-415c-ac71-7138f6e3a3cc/operator/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.445378 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-tlvw5_d3d2d6b0-b259-4030-a685-d9e984324c58/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.464441 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-mrx86_538c332f-1a4e-4bf8-b6be-12aae176c5ab/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.501157 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-qsjxl_1b169eb7-1896-40cd-aefd-fc7b69b2bbdf/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.661454 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-bcmcn_2df55346-97f6-4abe-b4f5-4044995f7c2c/manager/0.log" Jan 29 16:28:22 crc kubenswrapper[4835]: I0129 16:28:22.722081 4835 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-rmmtl_16a82779-3964-4e89-ab3f-162ab6ea8141/manager/0.log" Jan 29 16:28:40 crc kubenswrapper[4835]: I0129 16:28:40.091848 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8vm4c_724b0b6d-fe63-47f2-85dd-90d5accd7cf3/control-plane-machine-set-operator/0.log" Jan 29 16:28:40 crc kubenswrapper[4835]: I0129 16:28:40.237902 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g8qkk_f24c4f50-5254-4170-a11d-b8cf49841066/machine-api-operator/0.log" Jan 29 16:28:40 crc kubenswrapper[4835]: I0129 16:28:40.255800 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g8qkk_f24c4f50-5254-4170-a11d-b8cf49841066/kube-rbac-proxy/0.log" Jan 29 16:28:51 crc kubenswrapper[4835]: I0129 16:28:51.421853 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-4d2jn_a5e7bd98-b92f-4603-9151-47c925db8362/cert-manager-controller/0.log" Jan 29 16:28:51 crc kubenswrapper[4835]: I0129 16:28:51.654077 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-f7fvv_62d89b5b-077f-4d52-aaa5-7feab7ea8063/cert-manager-cainjector/0.log" Jan 29 16:28:51 crc kubenswrapper[4835]: I0129 16:28:51.685233 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-45j7x_08165ae4-532a-4357-9f95-1a1aeabe56ab/cert-manager-webhook/0.log" Jan 29 16:29:02 crc kubenswrapper[4835]: I0129 16:29:02.762947 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-2lh69_92a62ad2-e26b-4cfa-b7aa-3d2818800cca/nmstate-console-plugin/0.log" Jan 29 16:29:02 crc kubenswrapper[4835]: I0129 
16:29:02.885125 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-l8zn7_38b5bcee-28f4-4710-b03b-afc61a857751/nmstate-handler/0.log" Jan 29 16:29:02 crc kubenswrapper[4835]: I0129 16:29:02.940120 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vtqfs_04026234-d256-48f0-a82f-bbc23b7439d4/nmstate-metrics/0.log" Jan 29 16:29:02 crc kubenswrapper[4835]: I0129 16:29:02.967245 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vtqfs_04026234-d256-48f0-a82f-bbc23b7439d4/kube-rbac-proxy/0.log" Jan 29 16:29:03 crc kubenswrapper[4835]: I0129 16:29:03.073434 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-8zbjq_3f78989b-0a38-49f8-b766-42dcb7742227/nmstate-operator/0.log" Jan 29 16:29:03 crc kubenswrapper[4835]: I0129 16:29:03.156513 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-ltd82_efdfb4e0-5ef9-4bba-80eb-5ce49b592676/nmstate-webhook/0.log" Jan 29 16:29:21 crc kubenswrapper[4835]: I0129 16:29:21.615007 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:29:21 crc kubenswrapper[4835]: I0129 16:29:21.615543 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:29:26 crc kubenswrapper[4835]: I0129 16:29:26.598191 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-b6cql_d0d89751-719b-4841-828a-bbada38fdf17/kube-rbac-proxy/0.log" Jan 29 16:29:26 crc kubenswrapper[4835]: I0129 16:29:26.844504 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-frr-files/0.log" Jan 29 16:29:26 crc kubenswrapper[4835]: I0129 16:29:26.991739 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-b6cql_d0d89751-719b-4841-828a-bbada38fdf17/controller/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.031922 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-reloader/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.038812 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-frr-files/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.041833 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-metrics/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.184122 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-reloader/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.346878 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-metrics/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.357585 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-reloader/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.357699 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-frr-files/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.387086 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-metrics/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.558033 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-reloader/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.561332 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-frr-files/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.604905 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/controller/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.605020 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/cp-metrics/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.737760 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/frr-metrics/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.784809 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/kube-rbac-proxy/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.833986 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/kube-rbac-proxy-frr/0.log" Jan 29 16:29:27 crc kubenswrapper[4835]: I0129 16:29:27.970700 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/reloader/0.log" Jan 29 16:29:28 crc kubenswrapper[4835]: I0129 16:29:28.103190 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-jl5kk_d22521b2-9ba9-4e2b-9dcc-551aeb8c19b5/frr-k8s-webhook-server/0.log" Jan 29 16:29:28 crc kubenswrapper[4835]: I0129 16:29:28.162141 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-85cbcd5479-9qqqm_52588c5c-ffb3-4534-994f-ac105b5cf44d/manager/0.log" Jan 29 16:29:28 crc kubenswrapper[4835]: I0129 16:29:28.390327 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-64dff46b5d-ndsf6_88695349-f8fe-4ca5-9d25-c7c822d23e89/webhook-server/0.log" Jan 29 16:29:28 crc kubenswrapper[4835]: I0129 16:29:28.570294 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cpg6z_d550dcee-c0e2-461b-a5a7-19cf598f3193/kube-rbac-proxy/0.log" Jan 29 16:29:28 crc kubenswrapper[4835]: I0129 16:29:28.952263 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cpg6z_d550dcee-c0e2-461b-a5a7-19cf598f3193/speaker/0.log" Jan 29 16:29:29 crc kubenswrapper[4835]: I0129 16:29:29.037109 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hs7gp_27ff8d5f-5738-4d00-8419-6b16a94826cc/frr/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.425187 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/util/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.585499 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/util/0.log" 
Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.645689 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/pull/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.650926 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/pull/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.817990 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/util/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.821975 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/pull/0.log" Jan 29 16:29:40 crc kubenswrapper[4835]: I0129 16:29:40.836349 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnqkrl_fca125aa-5480-4cd7-b41e-e55568e06ffd/extract/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.037563 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.158105 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.220246 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/pull/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.241562 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/pull/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.406398 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.407006 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/pull/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.425637 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713jpbwm_d892ba7a-74a8-4d73-942f-0eb27888b77f/extract/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.556320 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.744870 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.776459 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/pull/0.log" Jan 29 
16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.782676 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/pull/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.944250 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/util/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.953559 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/pull/0.log" Jan 29 16:29:41 crc kubenswrapper[4835]: I0129 16:29:41.986314 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5982c2_c34b77b3-bdfd-45de-b618-c998472f00de/extract/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.121668 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-utilities/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.305756 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-content/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.328461 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-utilities/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.341577 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-content/0.log" Jan 29 
16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.524084 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-content/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.524161 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/extract-utilities/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.746409 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-utilities/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.944378 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-utilities/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.957126 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-content/0.log" Jan 29 16:29:42 crc kubenswrapper[4835]: I0129 16:29:42.976485 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-content/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.136709 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xsmv7_30b77bb8-df69-4504-ae68-5e327d4a3992/registry-server/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.181154 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-utilities/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.191730 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/extract-content/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.378856 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sl2bp_9c7d43b2-a628-4671-8901-170372171693/marketplace-operator/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.583965 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-utilities/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.733666 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9pqfh_1b2510b4-6dfe-4752-be13-cf4693bc139c/registry-server/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.861592 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-content/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.869990 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-content/0.log" Jan 29 16:29:43 crc kubenswrapper[4835]: I0129 16:29:43.876731 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-utilities/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.103006 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-utilities/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.148073 4835 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/extract-content/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.224635 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z8mjw_4fa95b30-fb32-42f1-a406-14562750b5c9/registry-server/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.285802 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-utilities/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.462084 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-utilities/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.474620 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-content/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.491555 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-content/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.667259 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-utilities/0.log" Jan 29 16:29:44 crc kubenswrapper[4835]: I0129 16:29:44.682587 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/extract-content/0.log" Jan 29 16:29:45 crc kubenswrapper[4835]: I0129 16:29:45.244397 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k5k76_44dfd56f-fd32-439c-9b01-0405d1947663/registry-server/0.log" Jan 29 
16:29:51 crc kubenswrapper[4835]: I0129 16:29:51.614872 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:29:51 crc kubenswrapper[4835]: I0129 16:29:51.615633 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.157937 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f"] Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.159369 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.176829 4835 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.176859 4835 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.189332 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f"] Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.225043 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.225116 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbbr\" (UniqueName: \"kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.225136 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.326597 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.326668 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsbbr\" (UniqueName: \"kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.326692 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.327906 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.334004 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.357791 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsbbr\" (UniqueName: \"kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr\") pod \"collect-profiles-29495070-pdk9f\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.493085 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:00 crc kubenswrapper[4835]: I0129 16:30:00.915236 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f"] Jan 29 16:30:01 crc kubenswrapper[4835]: I0129 16:30:01.890748 4835 generic.go:334] "Generic (PLEG): container finished" podID="3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" containerID="45e4a5739adc4852826714f1bcd7780b4c6cb57a404582de0c887803f9c3cbf1" exitCode=0 Jan 29 16:30:01 crc kubenswrapper[4835]: I0129 16:30:01.890801 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" event={"ID":"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8","Type":"ContainerDied","Data":"45e4a5739adc4852826714f1bcd7780b4c6cb57a404582de0c887803f9c3cbf1"} Jan 29 16:30:01 crc kubenswrapper[4835]: I0129 16:30:01.891045 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" 
event={"ID":"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8","Type":"ContainerStarted","Data":"3bbd04c94fac31b744e386977e2688db2c6cf72210ebfab389c1e5a1963c07c8"} Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.147981 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.269887 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume\") pod \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.269995 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume\") pod \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.270139 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsbbr\" (UniqueName: \"kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr\") pod \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\" (UID: \"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8\") " Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.271771 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" (UID: "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.276663 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr" (OuterVolumeSpecName: "kube-api-access-nsbbr") pod "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" (UID: "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8"). InnerVolumeSpecName "kube-api-access-nsbbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.277654 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" (UID: "3c47ba02-4f8b-4ec5-af6f-d950c3988eb8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.372091 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsbbr\" (UniqueName: \"kubernetes.io/projected/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-kube-api-access-nsbbr\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.372155 4835 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.372168 4835 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c47ba02-4f8b-4ec5-af6f-d950c3988eb8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.905787 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" 
event={"ID":"3c47ba02-4f8b-4ec5-af6f-d950c3988eb8","Type":"ContainerDied","Data":"3bbd04c94fac31b744e386977e2688db2c6cf72210ebfab389c1e5a1963c07c8"} Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.906069 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bbd04c94fac31b744e386977e2688db2c6cf72210ebfab389c1e5a1963c07c8" Jan 29 16:30:03 crc kubenswrapper[4835]: I0129 16:30:03.905823 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29495070-pdk9f" Jan 29 16:30:04 crc kubenswrapper[4835]: I0129 16:30:04.225902 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82"] Jan 29 16:30:04 crc kubenswrapper[4835]: I0129 16:30:04.231171 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29495025-kvn82"] Jan 29 16:30:04 crc kubenswrapper[4835]: I0129 16:30:04.501596 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905028c3-8935-4143-9c14-3550f391ca82" path="/var/lib/kubelet/pods/905028c3-8935-4143-9c14-3550f391ca82/volumes" Jan 29 16:30:21 crc kubenswrapper[4835]: I0129 16:30:21.615203 4835 patch_prober.go:28] interesting pod/machine-config-daemon-gpcs8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 16:30:21 crc kubenswrapper[4835]: I0129 16:30:21.617100 4835 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 16:30:21 crc 
kubenswrapper[4835]: I0129 16:30:21.617288 4835 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" Jan 29 16:30:21 crc kubenswrapper[4835]: I0129 16:30:21.618094 4835 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3"} pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 16:30:21 crc kubenswrapper[4835]: I0129 16:30:21.618255 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" containerName="machine-config-daemon" containerID="cri-o://48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" gracePeriod=600 Jan 29 16:30:21 crc kubenswrapper[4835]: E0129 16:30:21.742311 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:30:22 crc kubenswrapper[4835]: I0129 16:30:22.035986 4835 generic.go:334] "Generic (PLEG): container finished" podID="82353ab9-3930-4794-8182-a1e0948778df" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" exitCode=0 Jan 29 16:30:22 crc kubenswrapper[4835]: I0129 16:30:22.036327 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" 
event={"ID":"82353ab9-3930-4794-8182-a1e0948778df","Type":"ContainerDied","Data":"48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3"} Jan 29 16:30:22 crc kubenswrapper[4835]: I0129 16:30:22.036365 4835 scope.go:117] "RemoveContainer" containerID="3d1ca6b7652e348ea60766b730534dfee419193b4c923d3deeddfdeb59239217" Jan 29 16:30:22 crc kubenswrapper[4835]: I0129 16:30:22.037158 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:30:22 crc kubenswrapper[4835]: E0129 16:30:22.037404 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:30:33 crc kubenswrapper[4835]: I0129 16:30:33.492895 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:30:33 crc kubenswrapper[4835]: E0129 16:30:33.493729 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:30:44 crc kubenswrapper[4835]: I0129 16:30:44.045798 4835 scope.go:117] "RemoveContainer" containerID="41d0c94c66b10e93048a7093d48f121ff3238c0f71e4726cae6e37bfb40953f0" Jan 29 16:30:45 crc kubenswrapper[4835]: I0129 16:30:45.491916 4835 scope.go:117] "RemoveContainer" 
containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:30:45 crc kubenswrapper[4835]: E0129 16:30:45.492428 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:30:57 crc kubenswrapper[4835]: I0129 16:30:57.491613 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:30:57 crc kubenswrapper[4835]: E0129 16:30:57.492262 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:30:59 crc kubenswrapper[4835]: I0129 16:30:59.282295 4835 generic.go:334] "Generic (PLEG): container finished" podID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerID="1bffa605c88ffd761a918816269e8591895b47b266e98c555068d8657620eef2" exitCode=0 Jan 29 16:30:59 crc kubenswrapper[4835]: I0129 16:30:59.282400 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tkmcf/must-gather-zvw76" event={"ID":"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64","Type":"ContainerDied","Data":"1bffa605c88ffd761a918816269e8591895b47b266e98c555068d8657620eef2"} Jan 29 16:30:59 crc kubenswrapper[4835]: I0129 16:30:59.283184 4835 scope.go:117] "RemoveContainer" containerID="1bffa605c88ffd761a918816269e8591895b47b266e98c555068d8657620eef2" 
Jan 29 16:30:59 crc kubenswrapper[4835]: I0129 16:30:59.723394 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tkmcf_must-gather-zvw76_4e8568d5-fda8-4eb7-9bcc-133c96d2ad64/gather/0.log" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.002453 4835 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tkmcf/must-gather-zvw76"] Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.003236 4835 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tkmcf/must-gather-zvw76" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="copy" containerID="cri-o://99ea45b723352d2a938114cec5a1c0c6beaad282c7cf429b293985ce169ad5a6" gracePeriod=2 Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.011725 4835 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tkmcf/must-gather-zvw76"] Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.339347 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tkmcf_must-gather-zvw76_4e8568d5-fda8-4eb7-9bcc-133c96d2ad64/copy/0.log" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.339990 4835 generic.go:334] "Generic (PLEG): container finished" podID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerID="99ea45b723352d2a938114cec5a1c0c6beaad282c7cf429b293985ce169ad5a6" exitCode=143 Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.340043 4835 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa7316d670d5edc61babb4b62250d9373b03008b47ebd523afe3632d3927018" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.364994 4835 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tkmcf_must-gather-zvw76_4e8568d5-fda8-4eb7-9bcc-133c96d2ad64/copy/0.log" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.365361 4835 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.523401 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzj9q\" (UniqueName: \"kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q\") pod \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.523662 4835 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output\") pod \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\" (UID: \"4e8568d5-fda8-4eb7-9bcc-133c96d2ad64\") " Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.528820 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q" (OuterVolumeSpecName: "kube-api-access-tzj9q") pod "4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" (UID: "4e8568d5-fda8-4eb7-9bcc-133c96d2ad64"). InnerVolumeSpecName "kube-api-access-tzj9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.622204 4835 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" (UID: "4e8568d5-fda8-4eb7-9bcc-133c96d2ad64"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.625447 4835 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzj9q\" (UniqueName: \"kubernetes.io/projected/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-kube-api-access-tzj9q\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:07 crc kubenswrapper[4835]: I0129 16:31:07.625515 4835 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 16:31:08 crc kubenswrapper[4835]: I0129 16:31:08.345503 4835 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tkmcf/must-gather-zvw76" Jan 29 16:31:08 crc kubenswrapper[4835]: I0129 16:31:08.501321 4835 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" path="/var/lib/kubelet/pods/4e8568d5-fda8-4eb7-9bcc-133c96d2ad64/volumes" Jan 29 16:31:10 crc kubenswrapper[4835]: I0129 16:31:10.491847 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:31:10 crc kubenswrapper[4835]: E0129 16:31:10.492173 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:31:21 crc kubenswrapper[4835]: I0129 16:31:21.494937 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:31:21 crc kubenswrapper[4835]: E0129 16:31:21.495748 4835 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:31:32 crc kubenswrapper[4835]: I0129 16:31:32.492834 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:31:32 crc kubenswrapper[4835]: E0129 16:31:32.493990 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:31:43 crc kubenswrapper[4835]: I0129 16:31:43.491999 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:31:43 crc kubenswrapper[4835]: E0129 16:31:43.493073 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:31:54 crc kubenswrapper[4835]: I0129 16:31:54.492385 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:31:54 crc kubenswrapper[4835]: E0129 
16:31:54.493255 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:32:08 crc kubenswrapper[4835]: I0129 16:32:08.497742 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:32:08 crc kubenswrapper[4835]: E0129 16:32:08.498730 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:32:20 crc kubenswrapper[4835]: I0129 16:32:20.493384 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:32:20 crc kubenswrapper[4835]: E0129 16:32:20.494402 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:32:33 crc kubenswrapper[4835]: I0129 16:32:33.491767 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:32:33 crc 
kubenswrapper[4835]: E0129 16:32:33.493388 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:32:45 crc kubenswrapper[4835]: I0129 16:32:45.492009 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:32:45 crc kubenswrapper[4835]: E0129 16:32:45.492517 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.017209 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ff6dk"] Jan 29 16:32:48 crc kubenswrapper[4835]: E0129 16:32:48.017853 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" containerName="collect-profiles" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.017866 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" containerName="collect-profiles" Jan 29 16:32:48 crc kubenswrapper[4835]: E0129 16:32:48.017882 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="copy" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.017888 4835 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="copy" Jan 29 16:32:48 crc kubenswrapper[4835]: E0129 16:32:48.017918 4835 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="gather" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.017924 4835 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="gather" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.018049 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="gather" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.018062 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e8568d5-fda8-4eb7-9bcc-133c96d2ad64" containerName="copy" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.018070 4835 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c47ba02-4f8b-4ec5-af6f-d950c3988eb8" containerName="collect-profiles" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.019006 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.036954 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ff6dk"] Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.095769 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlh6\" (UniqueName: \"kubernetes.io/projected/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-kube-api-access-xvlh6\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.095817 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-utilities\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.095986 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-catalog-content\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.197873 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvlh6\" (UniqueName: \"kubernetes.io/projected/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-kube-api-access-xvlh6\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.198185 4835 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-utilities\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.198374 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-catalog-content\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.198791 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-utilities\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.198820 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-catalog-content\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.216795 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvlh6\" (UniqueName: \"kubernetes.io/projected/28d8df57-7901-4ed8-b8f4-3ef4d4e1038a-kube-api-access-xvlh6\") pod \"certified-operators-ff6dk\" (UID: \"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a\") " pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.341364 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ff6dk" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.822749 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qw7pf"] Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.824514 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.829851 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw7pf"] Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.838792 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ff6dk"] Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.909392 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hp4n\" (UniqueName: \"kubernetes.io/projected/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-kube-api-access-7hp4n\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.909507 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-catalog-content\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:48 crc kubenswrapper[4835]: I0129 16:32:48.909537 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-utilities\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " 
pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.010418 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-utilities\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.010556 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hp4n\" (UniqueName: \"kubernetes.io/projected/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-kube-api-access-7hp4n\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.010633 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-catalog-content\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.011643 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-utilities\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.011680 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-catalog-content\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " 
pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.029922 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hp4n\" (UniqueName: \"kubernetes.io/projected/5d5c58ef-6235-40c6-b12b-2e2e6334e2b2-kube-api-access-7hp4n\") pod \"community-operators-qw7pf\" (UID: \"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2\") " pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.079260 4835 generic.go:334] "Generic (PLEG): container finished" podID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" containerID="87c73e76807e51ec3b004d5f56ca812c1e33a6592309f9b2903c13fd1a9e303c" exitCode=0 Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.079612 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ff6dk" event={"ID":"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a","Type":"ContainerDied","Data":"87c73e76807e51ec3b004d5f56ca812c1e33a6592309f9b2903c13fd1a9e303c"} Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.079649 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ff6dk" event={"ID":"28d8df57-7901-4ed8-b8f4-3ef4d4e1038a","Type":"ContainerStarted","Data":"9d484e91fb5b4fe1d75719cccf690f5fc0fb98202fd5e7ff3085033effe9e1d8"} Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.082772 4835 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.140914 4835 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qw7pf" Jan 29 16:32:49 crc kubenswrapper[4835]: E0129 16:32:49.212041 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:32:49 crc kubenswrapper[4835]: E0129 16:32:49.212192 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy
:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ff6dk_openshift-marketplace(28d8df57-7901-4ed8-b8f4-3ef4d4e1038a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:49 crc kubenswrapper[4835]: E0129 16:32:49.213577 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:32:49 crc kubenswrapper[4835]: I0129 16:32:49.657374 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qw7pf"] Jan 29 16:32:50 crc kubenswrapper[4835]: I0129 16:32:50.086708 4835 generic.go:334] "Generic (PLEG): container finished" podID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" containerID="4da4c548b690f6dc87d6bc67b4562a721a4c0d18fa9327614389ae0412da0112" exitCode=0 Jan 29 16:32:50 crc kubenswrapper[4835]: I0129 16:32:50.086812 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw7pf" event={"ID":"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2","Type":"ContainerDied","Data":"4da4c548b690f6dc87d6bc67b4562a721a4c0d18fa9327614389ae0412da0112"} Jan 29 16:32:50 crc kubenswrapper[4835]: I0129 16:32:50.086867 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qw7pf" event={"ID":"5d5c58ef-6235-40c6-b12b-2e2e6334e2b2","Type":"ContainerStarted","Data":"e4417c1a80eff26326f383f69fa69e32e9a2374dd9dc2a678ddb6edc18eaed3b"} Jan 29 16:32:50 crc 
kubenswrapper[4835]: E0129 16:32:50.088132 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:32:50 crc kubenswrapper[4835]: E0129 16:32:50.210512 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:32:50 crc kubenswrapper[4835]: E0129 16:32:50.210706 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qw7pf_openshift-marketplace(5d5c58ef-6235-40c6-b12b-2e2e6334e2b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:50 crc kubenswrapper[4835]: E0129 16:32:50.211911 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.017576 4835 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjh8"] Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.022933 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.030267 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjh8"] Jan 29 16:32:51 crc kubenswrapper[4835]: E0129 16:32:51.096815 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.146809 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpf8\" (UniqueName: \"kubernetes.io/projected/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-kube-api-access-wnpf8\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.147138 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-utilities\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.147254 4835 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-catalog-content\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.248539 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-utilities\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.248605 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-catalog-content\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.248694 4835 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnpf8\" (UniqueName: \"kubernetes.io/projected/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-kube-api-access-wnpf8\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.249121 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-utilities\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.249141 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-catalog-content\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.291915 4835 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnpf8\" (UniqueName: \"kubernetes.io/projected/5c37ef82-d2cb-4850-ba02-6f7bc0ad1054-kube-api-access-wnpf8\") pod \"redhat-marketplace-jtjh8\" (UID: \"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054\") " pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.340582 4835 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jtjh8" Jan 29 16:32:51 crc kubenswrapper[4835]: I0129 16:32:51.578363 4835 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jtjh8"] Jan 29 16:32:52 crc kubenswrapper[4835]: I0129 16:32:52.104011 4835 generic.go:334] "Generic (PLEG): container finished" podID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" containerID="43efe4f1dd59ac13cf44c18271463ef9dd6f16c8c6cbbcf915ded51c3ddc6a9d" exitCode=0 Jan 29 16:32:52 crc kubenswrapper[4835]: I0129 16:32:52.104093 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjh8" event={"ID":"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054","Type":"ContainerDied","Data":"43efe4f1dd59ac13cf44c18271463ef9dd6f16c8c6cbbcf915ded51c3ddc6a9d"} Jan 29 16:32:52 crc kubenswrapper[4835]: I0129 16:32:52.104349 4835 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jtjh8" event={"ID":"5c37ef82-d2cb-4850-ba02-6f7bc0ad1054","Type":"ContainerStarted","Data":"6a14f6c17634f134b14888f89d72996516dd6ff206aa3929d550db23ae05630d"} Jan 29 16:32:52 crc kubenswrapper[4835]: E0129 16:32:52.239365 4835 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:32:52 crc kubenswrapper[4835]: E0129 16:32:52.239530 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnpf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-jtjh8_openshift-marketplace(5c37ef82-d2cb-4850-ba02-6f7bc0ad1054): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:32:52 crc kubenswrapper[4835]: E0129 16:32:52.241083 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:32:53 crc kubenswrapper[4835]: E0129 16:32:53.116397 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:32:58 crc kubenswrapper[4835]: I0129 16:32:58.497093 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:32:58 crc kubenswrapper[4835]: E0129 16:32:58.498217 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:33:02 crc kubenswrapper[4835]: E0129 16:33:02.623417 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:33:02 crc kubenswrapper[4835]: E0129 16:33:02.624134 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qw7pf_openshift-marketplace(5d5c58ef-6235-40c6-b12b-2e2e6334e2b2): ErrImagePull: 
initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:02 crc kubenswrapper[4835]: E0129 16:33:02.626573 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:33:04 crc kubenswrapper[4835]: E0129 16:33:04.626095 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:33:04 crc kubenswrapper[4835]: E0129 16:33:04.626289 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ff6dk_openshift-marketplace(28d8df57-7901-4ed8-b8f4-3ef4d4e1038a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:04 crc kubenswrapper[4835]: E0129 16:33:04.627555 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:33:08 crc kubenswrapper[4835]: E0129 16:33:08.628043 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:33:08 crc kubenswrapper[4835]: E0129 16:33:08.628731 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnpf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jtjh8_openshift-marketplace(5c37ef82-d2cb-4850-ba02-6f7bc0ad1054): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:08 crc kubenswrapper[4835]: E0129 16:33:08.630210 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:33:10 crc kubenswrapper[4835]: I0129 16:33:10.491456 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:33:10 crc kubenswrapper[4835]: E0129 16:33:10.491688 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:33:16 crc kubenswrapper[4835]: E0129 16:33:16.495547 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 
16:33:18 crc kubenswrapper[4835]: E0129 16:33:18.503109 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:33:21 crc kubenswrapper[4835]: I0129 16:33:21.492875 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:33:21 crc kubenswrapper[4835]: E0129 16:33:21.493483 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:33:22 crc kubenswrapper[4835]: E0129 16:33:22.494324 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:33:30 crc kubenswrapper[4835]: E0129 16:33:30.640046 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:33:30 crc kubenswrapper[4835]: E0129 16:33:30.640872 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qw7pf_openshift-marketplace(5d5c58ef-6235-40c6-b12b-2e2e6334e2b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:30 crc kubenswrapper[4835]: E0129 16:33:30.641962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:33:32 crc kubenswrapper[4835]: E0129 16:33:32.627239 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:33:32 crc kubenswrapper[4835]: E0129 16:33:32.627907 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ff6dk_openshift-marketplace(28d8df57-7901-4ed8-b8f4-3ef4d4e1038a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:32 crc kubenswrapper[4835]: E0129 16:33:32.629067 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:33:34 crc kubenswrapper[4835]: E0129 16:33:34.623440 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:33:34 crc kubenswrapper[4835]: E0129 16:33:34.623921 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnpf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jtjh8_openshift-marketplace(5c37ef82-d2cb-4850-ba02-6f7bc0ad1054): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:33:34 crc kubenswrapper[4835]: E0129 16:33:34.625184 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:33:35 crc kubenswrapper[4835]: I0129 16:33:35.492313 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:33:35 crc kubenswrapper[4835]: E0129 16:33:35.492764 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:33:44 crc kubenswrapper[4835]: I0129 16:33:44.114312 4835 scope.go:117] "RemoveContainer" containerID="99ea45b723352d2a938114cec5a1c0c6beaad282c7cf429b293985ce169ad5a6" Jan 29 16:33:44 crc kubenswrapper[4835]: I0129 16:33:44.136888 4835 scope.go:117] "RemoveContainer" containerID="1bffa605c88ffd761a918816269e8591895b47b266e98c555068d8657620eef2" Jan 29 16:33:45 crc kubenswrapper[4835]: E0129 
16:33:45.495313 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:33:47 crc kubenswrapper[4835]: E0129 16:33:47.494315 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:33:49 crc kubenswrapper[4835]: I0129 16:33:49.492090 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:33:49 crc kubenswrapper[4835]: E0129 16:33:49.492351 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:33:49 crc kubenswrapper[4835]: E0129 16:33:49.501211 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:33:59 crc kubenswrapper[4835]: E0129 16:33:59.495344 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:34:01 crc kubenswrapper[4835]: E0129 16:34:01.493503 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:34:04 crc kubenswrapper[4835]: I0129 16:34:04.492322 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:34:04 crc kubenswrapper[4835]: E0129 16:34:04.493951 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:34:04 crc kubenswrapper[4835]: E0129 16:34:04.493962 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:34:10 crc kubenswrapper[4835]: E0129 16:34:10.494656 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:34:16 crc kubenswrapper[4835]: E0129 16:34:16.650199 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 16:34:16 crc kubenswrapper[4835]: E0129 16:34:16.650864 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvlh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ff6dk_openshift-marketplace(28d8df57-7901-4ed8-b8f4-3ef4d4e1038a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:16 crc kubenswrapper[4835]: E0129 16:34:16.652667 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:34:17 crc kubenswrapper[4835]: I0129 16:34:17.492330 4835 scope.go:117] "RemoveContainer" containerID="48159f87b022feb7dc8ae2a0377eda2ac3e61136c821cef023a588bbac0faea3" Jan 29 16:34:17 crc kubenswrapper[4835]: E0129 16:34:17.492657 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gpcs8_openshift-machine-config-operator(82353ab9-3930-4794-8182-a1e0948778df)\"" pod="openshift-machine-config-operator/machine-config-daemon-gpcs8" podUID="82353ab9-3930-4794-8182-a1e0948778df" Jan 29 16:34:17 crc kubenswrapper[4835]: E0129 16:34:17.609439 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 16:34:17 crc 
kubenswrapper[4835]: E0129 16:34:17.609598 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnpf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jtjh8_openshift-marketplace(5c37ef82-d2cb-4850-ba02-6f7bc0ad1054): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:17 crc kubenswrapper[4835]: 
E0129 16:34:17.610851 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054" Jan 29 16:34:24 crc kubenswrapper[4835]: E0129 16:34:24.621080 4835 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 16:34:24 crc kubenswrapper[4835]: E0129 16:34:24.621771 4835 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7hp4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qw7pf_openshift-marketplace(5d5c58ef-6235-40c6-b12b-2e2e6334e2b2): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 16:34:24 crc kubenswrapper[4835]: E0129 16:34:24.623278 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" 
pod="openshift-marketplace/community-operators-qw7pf" podUID="5d5c58ef-6235-40c6-b12b-2e2e6334e2b2" Jan 29 16:34:29 crc kubenswrapper[4835]: E0129 16:34:29.494460 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ff6dk" podUID="28d8df57-7901-4ed8-b8f4-3ef4d4e1038a" Jan 29 16:34:29 crc kubenswrapper[4835]: E0129 16:34:29.495413 4835 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jtjh8" podUID="5c37ef82-d2cb-4850-ba02-6f7bc0ad1054"